Param AI / IoT and Robotics / Active Research
Project Octopus: Cognitive States in Physical Learning
Project R.O.S.E.: Robotic Operating System for Environments
Open by Default / IRB-Compliant Dataset / Infrastructure Gap
Bengaluru, India / Going Against The Flow / Intelligence Augmented — Not Replaced
01 Project Octopus
OCTOPUS
Cognitive States in Physical Learning Environments

Learning science has clearly established that cognitive state — where a learner actually is in their thinking at a given moment — determines the quality of what they take away from an experience more than the content they encounter. Adaptive and intelligent tutoring systems have pursued this insight for decades, with meaningful results in structured digital environments. The open problem is the physical world.

That is the problem Project Octopus is working on. Not adaptive learning as a category — that literature is deep and we build on it — but the specific infrastructure gap between what learning science knows and what can currently be deployed in a physical experiential environment at scale, without a skilled human mediator at every point of contact.

Open Questions
Q1. Can cognitive state be reliably detected from non-invasive behavioural signals in physical environments?
Q2. How does conversational AI dialogue style affect depth of thinking and persistence after failure?
Q3. Where does personalisation produce genuine understanding gains, as opposed to merely longer sessions?
Q4. How do group dynamics in shared spaces produce different cognitive outcomes from individual ones?
Phase: Infrastructure
Output: Dataset + System
Protocol: IRB-Compliant
Availability: Open Access
[System diagram: a sensor array, NLP engine, learning model, and data pipeline connect the physical environment's multimodal input and behavioural signals to the Octopus system's analytics.]
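The pipeline in the diagram can be sketched in a few lines. Everything below is an illustrative assumption, not the project's actual API or model — the class names, signal kinds, and the toy fusion rule are all hypothetical. The point is the shape of the flow: non-invasive behavioural signals from a physical environment (no biometrics, no stored identity) are fused into a coarse cognitive-state estimate that an adaptive system could act on.

```python
from dataclasses import dataclass
from enum import Enum


class CognitiveState(Enum):
    # Hypothetical coarse states an adaptive system might respond to.
    EXPLORING = "exploring"
    FOCUSED = "focused"
    STUCK = "stuck"
    DISENGAGED = "disengaged"


@dataclass
class BehaviouralSignal:
    # One non-invasive observation: no biometrics, no stored profile.
    session_id: str
    timestamp: float
    kind: str       # e.g. "dwell", "utterance", "manipulation"
    value: float    # normalised intensity or duration for that signal


def estimate_state(signals: list[BehaviouralSignal]) -> CognitiveState:
    """Toy fusion rule standing in for a learned model."""
    if not signals:
        return CognitiveState.DISENGAGED
    dwell = sum(s.value for s in signals if s.kind == "dwell")
    retries = sum(1 for s in signals if s.kind == "manipulation")
    if retries >= 3 and dwell > 2.0:
        return CognitiveState.STUCK   # repeated attempts, sustained attention
    if dwell > 2.0:
        return CognitiveState.FOCUSED
    return CognitiveState.EXPLORING
```

The design choice the sketch highlights is the one named in Q1: the estimator consumes only behavioural events, so the question of reliability lives entirely in how well such signals can be captured and fused, not in what is stored about the visitor.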
02 Project R.O.S.E.
Robotic Operating System for Environments
[System architecture diagram / ROSE: a Deployment Interface (non-technical operator layer), Visitor Awareness (behavioural signals, no biometrics), NLP + Interaction (multilingual, unscripted, real input), and a Navigation Module (high-footfall unstructured environments) sit above a hardware abstraction layer, "the transferable layer that doesn't exist yet" — this is where ROSE sits. Below it: Robot A, Robot B, Robot C, any hardware, any robot.]
R.O.S.E.
R — Robotic   O — Operating   S — System   E — Environments

The institutional deployment problem in public robotics is well documented and consistently unsolved. Social robots have been deployed in public experience environments for over two decades — the Smithsonian, CosmoCaixa in Barcelona, science museums across Japan. Each deployment required a dedicated engineering team, a proprietary software stack locked to a single manufacturer, and sustained institutional investment to keep running. None of that work transferred between institutions.

Project ROSE is building that missing layer: an open, modular operating system for deploying robots in public experience environments.

Open Questions
Q1. Can a hardware abstraction layer make any supported robot interchangeable without rewriting the stack above it?
Q2. Can navigation be validated for high-footfall, unstructured public environments where existing stacks fail?
Q3. Is natural-language interaction with real, unscripted, multilingual input reliable enough to be the default mode?
Q4. Is visitor awareness from behavioural signals alone sufficient, without biometric data or stored profiles?
Q5. Can a non-technical facilitator deploy and recover the system without engineering support?
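The abstraction Q1 asks about can be sketched minimally. The interface below is hypothetical — ROSE's actual driver API is not described in this write-up — but it shows the property under test: everything above the abstraction (navigation, interaction, deployment tooling) is written against one interface, so swapping Robot A for Robot B means writing a new driver, not rewriting the stack.

```python
from abc import ABC, abstractmethod


class RobotDriver(ABC):
    """Per-hardware driver: the only code that changes between robots."""

    @abstractmethod
    def move_to(self, x: float, y: float) -> bool:
        """Navigate to a point; return True on arrival."""

    @abstractmethod
    def say(self, text: str, language: str = "en") -> None:
        """Speak to a visitor in the given language."""


class RobotA(RobotDriver):
    # A stand-in driver for one manufacturer's hardware.
    def __init__(self) -> None:
        self.log: list[str] = []

    def move_to(self, x: float, y: float) -> bool:
        self.log.append(f"move {x},{y}")
        return True

    def say(self, text: str, language: str = "en") -> None:
        self.log.append(f"say[{language}] {text}")


def greet_visitor(robot: RobotDriver, x: float, y: float) -> None:
    # Hardware-agnostic behaviour: runs unchanged on any conforming driver.
    if robot.move_to(x, y):
        robot.say("Welcome!", language="en")
```

`greet_visitor` never mentions a specific robot; porting it to Robot B means implementing `RobotDriver` once for that hardware, which is the interchangeability Q1 is probing.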
Phase: Architecture
Output: Open Stack
Lead: Lab Head
License: Open by Default