What we’re about
We are a group of startup engineers, research scientists, computational linguists, mathematicians, philosophers, and others interested in understanding the meaning of text, reasoning, and human intent through technology. We want to apply that understanding to building new businesses and improving the overall human experience in the modern connected world. The MIND Stack explained: mind.wtf.
This is a technical AI meetup: we build systems with Machine Learning on top of Data Pipelines, focusing on what we can try in open source, learn from, and improve, and on modeling human behavior in industry for practical results.
The advisory board for this meetup is the Cicero Institute (Cicero.ai), and its conferences are AI.vision and self.driving.cars. We like specific technical problems (self-driving cars) and the way they inform higher-level inference about the future of AI (AI.vision).
Upcoming events (1)
[Luma registration required!] LLM + Graph Database for RAG and RAG with Ray!
Anyscale, San Francisco, CA
You have to register on Luma to attend!
We have two great talks from the AI Alliance members Neo4j and the host Anyscale!
1. LLM + Graph Database for RAG
LLMs can provide realistic-sounding answers to almost any question, even if those answers are entirely made up. To anchor an LLM in reality and mitigate the risks of fabricated information and unauthorized access to sensitive data, try incorporating a Knowledge Graph. Grounding the model this way helps prevent inaccurate responses and supports a more reliable, secure outcome.
This presentation will show you the benefits of Graph Databases over regular databases and how to use GenAI with RAG to eliminate hallucinations, enforce security, and improve accuracy. We will also discuss why a vector index plus Knowledge Graph provides better, smarter, faster results than a pure vector database.

Andreas Kollegger is a technological humanist. Starting at NASA, Andreas designed systems from scratch to support science missions. Then in Zambia, he built medical informatics systems to apply technology for social good. Now with Neo4j, he is democratizing graph databases to validate and extend our intuitions about how the world works. Everything is connected.
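To make the "vector index plus Knowledge Graph" idea concrete, here is a minimal, self-contained Python sketch of combining the two retrieval styles into one grounded prompt. Everything in it is invented for illustration: the triples, documents, and bag-of-words "embedding" stand in for a real Neo4j graph, a real vector index, and a real embedding model.

```python
from collections import Counter
import math

# Toy "knowledge graph": (subject, relation, object) triples.
# All facts here are invented for illustration only.
TRIPLES = [
    ("Ada Lovelace", "wrote_about", "Analytical Engine"),
    ("Analytical Engine", "designed_by", "Charles Babbage"),
    ("Ada Lovelace", "born_in", "London"),
]

# Toy document store, standing in for a vector-indexed corpus.
DOCS = [
    "Ada Lovelace wrote the first published algorithm.",
    "The Analytical Engine was a proposed mechanical computer.",
    "London is the capital of the United Kingdom.",
]

def embed(text):
    """Bag-of-words 'embedding' -- a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def vector_retrieve(question, k=2):
    """Rank documents by similarity to the question; keep the top k."""
    q = embed(question)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def graph_retrieve(question):
    """Return triples whose subject or object is mentioned in the question."""
    ql = question.lower()
    return [t for t in TRIPLES if t[0].lower() in ql or t[2].lower() in ql]

def build_prompt(question):
    """Combine graph facts and retrieved documents into grounded context."""
    facts = ["{} -[{}]-> {}".format(*t) for t in graph_retrieve(question)]
    return ("Context:\n" + "\n".join(facts + vector_retrieve(question))
            + "\nQuestion: " + question)

print(build_prompt("Who designed the Analytical Engine?"))
```

The point of the combination: the graph contributes explicit, verifiable facts (and a place to enforce access rules), while the vector side contributes fuzzy-matched passages; the LLM is then asked to answer only from that context.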
2. Build RAG-based large language model applications with Ray and KubeRay on Kubernetes
Large Language Models (LLMs) have changed the way we interact with information. A base LLM is only aware of the information it was trained on. Retrieval augmented generation (RAG) can address this issue by providing context from additional data sources. In this session, we'll build a RAG-based LLM application that incorporates external data sources to augment an OSS LLM. We'll show how to scale the workload with Ray and Kubernetes, and showcase a chatbot agent that gives factual answers.

Kai-Hsun Chen is a software engineer on the Ray Core team at Anyscale and the primary maintainer of KubeRay. He is an open-source enthusiast, as well as a committer and PMC member of Apache Submarine. Additionally, he is an interdisciplinary researcher with publications spanning electronic design automation, distributed computing, and system reliability.
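The scaling pattern this talk is about (fanning the per-document retrieval work of a RAG app out across many workers) can be gestured at with a stdlib-only sketch. This is not Ray code: the corpus, the bag-of-words "embedding", and the use of `ThreadPoolExecutor` are all illustrative stand-ins; in the talk's setup, Ray remote tasks on a KubeRay-managed cluster play the role the thread pool plays here.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter
import math

# Toy corpus -- stands in for the external data sources a RAG app indexes.
CORPUS = [
    "Ray is a framework for scaling Python workloads.",
    "KubeRay runs Ray clusters on Kubernetes.",
    "Retrieval augmented generation adds external context to an LLM prompt.",
    "A base LLM only knows the data it was trained on.",
]

def embed(text):
    """Bag-of-words 'embedding' -- a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def score(args):
    """Score one document against the query. This per-document unit of
    work is what a distributed framework would run as a remote task."""
    query_vec, doc = args
    d = embed(doc)
    dot = sum(query_vec[t] * d[t] for t in query_vec)
    norm = (math.sqrt(sum(v * v for v in query_vec.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return (dot / norm if norm else 0.0, doc)

def retrieve(question, k=2, workers=4):
    """Embed the query once, score documents in parallel, keep the top k."""
    q = embed(question)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored = list(pool.map(score, ((q, d) for d in CORPUS)))
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

print("Context passed to the LLM:", retrieve("How does KubeRay relate to Kubernetes?"))
```

The retrieved passages would then be prepended to the user's question as context, which is what lets the augmented LLM answer about data it was never trained on.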