MOBAS is a responsive architectural system that reimagines how spaces adapt to contemporary work and study needs. Unlike static configurations, it integrates structural flexibility, digital intelligence, and user participation to create environments that evolve in real time.
Developed within the Elastic Robotic Structures (ERS) framework, MOBAS uses a Bending-Active Tensile-Hybrid (BATH) system for real-time shape transformation through robotic actuation. Moving beyond pre-programmed automation, it embeds Artificial Intelligence, enabling occupants to co-design their environment and express preferences through intuitive digital interfaces.
Using natural language processing (NLP) and sentiment analysis via large language models (LLMs), MOBAS interprets commands, tone, and context through text or voice, adjusting spatial form and environmental conditions to match functional and emotional needs.
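The language-to-space mapping can be sketched as follows. In MOBAS the interpretation is done by an LLM; here a simple keyword rule stands in so the path from an utterance to spatial and lighting parameters is concrete. All intents, keywords, and parameter names are illustrative assumptions, not the project's actual schema.

```python
import re

# Hypothetical intent table: each intent carries the spatial response it
# triggers. Values are illustrative stand-ins for the LLM's output.
INTENTS = {
    "focus": {"keywords": ["work", "study", "focus"],
              "response": {"enclosure": "closed", "light": "cool_bright"}},
    "relax": {"keywords": ["rest", "relax", "break"],
              "response": {"enclosure": "open", "light": "warm_dim"}},
    "meet":  {"keywords": ["meet", "talk", "present"],
              "response": {"enclosure": "semi_open", "light": "neutral"}},
}

# Crude stand-in for sentiment analysis: words that signal a strained tone.
NEGATIVE_TONE = {"tired", "stressed", "overwhelmed"}

def interpret(utterance: str) -> dict:
    """Map a user utterance to an intent plus a spatial/lighting response."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    for intent, spec in INTENTS.items():
        if words & set(spec["keywords"]):
            response = dict(spec["response"])
            if words & NEGATIVE_TONE:
                # Soften the lighting when the tone sounds strained.
                response["light"] = "warm_dim"
            return {"intent": intent, **response}
    return {"intent": "unknown", "enclosure": "unchanged", "light": "unchanged"}

print(interpret("I want to work"))
print(interpret("I'm tired but need to study"))
```

The second call shows the sentiment layer overriding the default: the "focus" intent is kept, but the strained tone swaps cool task lighting for a warmer setting.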
As a modular, reconfigurable system, MOBAS showcases the potential of large-scale responsive architecture. Suitable for exhibitions, co-working hubs, or pop-ups, it advances adaptive, participatory design by proposing spaces that listen, learn, and evolve with their users.
Developed within RC2, MOBAS extends kinetic design research inspired by Frei Otto, using AI-controlled fabric structures to create flexible, adaptive workspaces.
An elastic knot was fabricated in a figure-eight loop, revealing the material’s flexibility and its potential for shaping space.
The system uses a sliding motor for fabric movement and threaded mechanisms for volume control, enabling precise, responsive actuation.
Two spool prototypes were tested: the 15 mm spool offered slower, finer control, while the 45 mm spool enabled rapid actuation in under 15 seconds.
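The spool comparison follows directly from geometry: winding speed scales with circumference, so tripling the diameter triples the fabric speed. A back-of-envelope check, in which only the spool diameters come from the text and the motor speed and fabric travel are assumed values for illustration:

```python
import math

RPM = 60        # assumed motor speed (illustrative)
TRAVEL_M = 1.5  # assumed fabric travel per actuation (illustrative)

def actuation_time(diameter_mm: float, rpm: float = RPM,
                   travel_m: float = TRAVEL_M) -> float:
    """Seconds to wind `travel_m` of fabric on a spool of given diameter."""
    circumference_m = math.pi * diameter_mm / 1000.0   # metres per revolution
    speed_m_per_s = circumference_m * rpm / 60.0       # revolutions per second
    return travel_m / speed_m_per_s

t15 = actuation_time(15)
t45 = actuation_time(45)
print(f"15 mm spool: {t15:.1f} s, 45 mm spool: {t45:.1f} s")
```

Under these assumptions the 45 mm spool completes the travel in roughly 10.6 s, consistent with the observed sub-15-second actuation, while the 15 mm spool takes three times as long.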
Three sliding-motor prototypes were tested: the first overloaded and the next two ran too slowly; the final version, fitted with rubber rollers and bearings, achieved smooth, reliable actuation.
LED control tests let users issue task-based commands like “I want to work,” prompting the system to adjust lighting for focus and concentration.
After LED tests, the study focused on motor control, training AI to operate the pod’s motors for more responsive spatial adaptation.
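The motor-control step can be sketched as a mapping from a target spatial state to setpoints for the two actuation mechanisms described above: the sliding motor (fabric position) and the threaded volume-control mechanism. The names, travel ranges, and linear mapping below are illustrative assumptions, not the pod's actual calibration.

```python
from dataclasses import dataclass

@dataclass
class MotorSetpoints:
    slide_mm: float      # sliding motor: fabric carriage position
    volume_turns: float  # threaded mechanism: turns from fully open

def setpoints_for(enclosure: float) -> MotorSetpoints:
    """Translate a target enclosure level into motor setpoints.

    `enclosure` runs from 0.0 (pod fully open) to 1.0 (fully enclosed).
    """
    if not 0.0 <= enclosure <= 1.0:
        raise ValueError("enclosure must be within [0, 1]")
    return MotorSetpoints(
        slide_mm=enclosure * 800.0,     # assumed 800 mm of fabric travel
        volume_turns=enclosure * 12.0,  # assumed 12 turns open-to-closed
    )

print(setpoints_for(0.5))
```

In this framing, the AI's job is to choose the enclosure level (and, in a fuller version, per-motor targets) from the interpreted user intent, while the function above stays a dumb, safe translation layer.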
The pod engages users through a mobile app, where commands can be given by text or speech. Interaction feels like a conversation, as the pod responds in real time by adapting its light, shade, and form to user intent.
Physical behaviour was observed and replicated through digital simulations, creating a library of spatial states. This archive illustrates how the system can transform and adapt, forming the basis for designing responsive environments.
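A minimal sketch of the state-library idea: each simulated configuration is archived together with the parameters that produced it, so a requested function can be matched to a known-good state. The field names and entries are illustrative, not the project's actual archive.

```python
from typing import Optional

# Each record pairs a function with the spatial parameters that realise it.
STATE_LIBRARY = [
    {"name": "focus_pod",   "function": "work",    "enclosure": 0.9, "light": "cool_bright"},
    {"name": "lounge",      "function": "relax",   "enclosure": 0.2, "light": "warm_dim"},
    {"name": "meeting_bay", "function": "meeting", "enclosure": 0.5, "light": "neutral"},
]

def find_state(function: str) -> Optional[dict]:
    """Return the archived state registered for a given function, if any."""
    for state in STATE_LIBRARY:
        if state["function"] == function:
            return state
    return None

print(find_state("work"))
```

Because the library is just data, new simulated states can be appended as they are validated, and the lookup layer needs no changes, which is what makes the archive a reusable design basis rather than a one-off record.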
Combining modules created spatial divisions for varied functions, showing how modular systems enable adaptable, dynamic environments.