Pact: Trustworthy Coordination for Multi-Agentic Ecosystems
Multi-agentic ecosystems are becoming commonplace, but how can we trust them? We propose Pact, a formal coordination language that unifies ideas from game theory, distributed systems, and cryptography to enable trustworthy multi-agent coordination.
ExoPredicator: Abstracting Time and State for Robot Planning
We introduce ExoPredicator, a system that learns abstract world models for robot planning. By abstracting state, time, and both endogenous and exogenous causal processes, ExoPredicator enables robots to quickly learn how dynamic environments work and plan efficiently in them.
Building an Unverified Compiler with Agents
Four agents spent 14 days and 93,000 lines of Lean building a verified JS-to-WASM compiler from scratch. The compiler ran; the proofs didn’t close.
AutumnBench: World Model Learning in Humans and AI
We’re releasing a new version of Autumn with human baseline results, AI performance comparisons, and an interactive benchmark for world model discovery. This release includes the MARA protocol and provides a public platform for testing causal reasoning capabilities.
Basis: Designing a New Kind of AI Research Organization
Written in 2022, this founding document defines Basis’ vision and thesis of how to build a new kind of research organization.
Project MARA Preview: Modeling, Abstraction, and Reasoning Agents
Project MARA aims to develop AI systems capable of performing everyday scientific discovery through active experimentation and abstract reasoning. The project will create systems that can discover and apply causal models across diverse domains, from physical robotics to digital interfaces.
NeuroAI for AI Safety
Research: Patrick Mineault, Niccolò Zanichelli, Joanne Zichen Peng, et al.
November 27, 2024
Basis contributed to a new technical roadmap, “NeuroAI for AI Safety,” from Amaranth Foundation. The roadmap aims to make AI systems safer by understanding and implementing the brain’s approach to intelligent behavior.
Linking Cognitive Strategy, Neural Mechanism, and Movement Statistics in Group Foraging Behaviors
In a new paper, we combined cognitive neuroscience with statistical methods to model group foraging behavior. Our ongoing work aims to construct a unifying framework that allows researchers to analyze complex group behaviors across different species and environments.
MetaCOG: Enhancing AI Vision with Human-Inspired Metacognition
Research: Marlene Berke, Zhangir Azerbayev, Mario Belledonne, et al.
July 16, 2024
In collaboration with Marlene Berke and the Computational Social Cognition Lab at Yale, we’re introducing MetaCOG, a probabilistic model that learns a metacognitive model of a neural object detector and uses it to improve the detector’s accuracy without feedback. This represents a step toward building AI systems that go beyond representing their inputs to also represent their own thought processes.
Linking Algorithms to Neural Mechanisms in Predictive Memory Models
Research: Ching Fang, Dmitriy Aronov, Larry Abbott, et al.
March 22, 2023
In a new paper, we demonstrate biologically-plausible neural network models that can compute important features of predictive learning and memory systems. Our results suggest that these features are more accessible in neural circuits than previously thought, and can support a broad range of cognitive functions. The work achieves something that has proved difficult in AI research: bridging a well-defined computational function with its neural mechanism.
Autumn: Causal Discovery Through Program Synthesis
Research: Ria Das, Armando Solar-Lezama, Joshua Tenenbaum, et al.
February 1, 2023
We’re introducing AutumnSynth, an algorithm that synthesizes the source code of simple 2D video games from a small amount of observed video data. This represents a step toward systems that can perform causal theory discovery in real-world environments.