agents.components.memory¶
Module Contents¶
Classes¶
Memory – Spatio-temporal memory component powered by eMEM.
API¶
- class agents.components.memory.Memory(*, layers: List[agents.ros.MemLayer], position: agents.ros.Topic, model_client: Optional[agents.clients.model_base.ModelClient] = None, embedding_client: Optional[agents.clients.model_base.ModelClient] = None, config: Optional[agents.config.MemoryConfig] = None, trigger: Union[agents.ros.Topic, List[agents.ros.Topic], float, agents.ros.Event] = 10.0, component_name: str, **kwargs)¶
Bases: agents.components.component_base.Component
Spatio-temporal memory component powered by eMEM.
Encodes perception layer data (text descriptions from VLMs, detections) into a graph-based spatio-temporal memory indexed by meaning, location, and time. Provides 10 retrieval tools as component actions and supports episode-based memory consolidation.
This component uses real-world coordinates from Odometry directly and provides consolidation, entity tracking, and structured retrieval tools instead of flat vector DB storage.
- Parameters:
layers (list[MemLayer]) – Input layers to encode. Each layer subscribes to a topic whose callback produces a string via _get_ui_content. Layers with is_internal_state=True are written via add_body_state and retrieved through the body_status tool; all other layers are perception layers retrieved through semantic_search and related tools.
position (Topic) – Odometry topic providing the robot’s current position.
model_client (Optional[ModelClient]) – Model client for memory consolidation (summarization, entity extraction). If not provided, consolidation uses simple text concatenation.
embedding_client (Optional[ModelClient]) – Model client for generating embeddings (e.g. OllamaClient with an embedding model). If not provided, falls back to sentence-transformers.
config (Optional[MemoryConfig]) – Memory configuration.
trigger (Union[Topic, list[Topic], float, Event]) – Trigger for the execution step (frequency in Hz, topic, or event).
component_name (str) – ROS node name for this component.
Example usage:
position = Topic(name="odom", msg_type="Odometry")
detections = Topic(name="detections", msg_type="Detections")
room_type = Topic(name="room_type", msg_type="String")
battery = Topic(name="battery_state", msg_type="BatteryState")

layer1 = MemLayer(subscribes_to=detections, temporal_change=True)
layer2 = MemLayer(subscribes_to=room_type, resolution_multiple=3)
layer3 = MemLayer(subscribes_to=battery, is_internal_state=True)

memory = Memory(
    layers=[layer1, layer2, layer3],
    position=position,
    model_client=llama_client,
    embedding_client=embed_client,
    config=MemoryConfig(db_path="/tmp/robot_memory.db"),
    trigger=15.0,
    component_name="memory",
)
- custom_on_configure()¶
Initialize eMEM and client connections.
- custom_on_deactivate()¶
Close eMEM and deinitialize clients.
- inspect_component() str¶
Return component info including configured layers.
Appends a Perception layers: section and, if any are configured, an Internal-state layers: section. A consumer like Cortex can read this to learn which layer tags observations are stored under. Useful when planning retrieval calls that take a layer filter: perception layers are queried via semantic_search/spatial_query/locate, while internal-state layers are queried via body_status.
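How a consumer might extract the layer tags from this output can be sketched as follows. The section format assumed here (heading lines followed by hyphen-prefixed entries) is illustrative, and parse_layer_sections is a hypothetical helper, not part of the library:

```python
def parse_layer_sections(info: str) -> dict:
    """Collect layer tags listed under 'Perception layers:' and
    'Internal-state layers:' headings into a dict of lists.
    The output format is an assumption for illustration."""
    sections = {"perception": [], "internal_state": []}
    current = None
    for line in info.splitlines():
        stripped = line.strip()
        if stripped == "Perception layers:":
            current = "perception"
        elif stripped == "Internal-state layers:":
            current = "internal_state"
        elif stripped.startswith("- ") and current is not None:
            sections[current].append(stripped[2:])
    return sections

sample = """Component: memory
Perception layers:
- detections
- room_type
Internal-state layers:
- battery_state
"""
layers = parse_layer_sections(sample)
```

A planner could then pass perception tags as layer filters to semantic_search and reserve internal-state tags for body_status.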
- store() None¶
Explicitly trigger storage of current layer data.
- store_specific_memory(content: str, layer_name: str = 'agent_notes', x: Optional[float] = None, y: Optional[float] = None, z: Optional[float] = None) bool¶
Store an arbitrary piece of text at a given (or current) position.
- Parameters:
content – Text to record.
layer_name – Layer tag to store under. Defaults to agent_notes.
x – Optional X in world-frame meters. If omitted, current odometry is used.
y – Optional Y in world-frame meters. If omitted, current odometry is used.
z – Optional Z in meters. If omitted, current odometry is used.
- Returns:
True if the note was stored, False if position was unavailable.
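The coordinate fallback described above might look like the following. This is an illustrative sketch, not the library's implementation; resolve_position and the odometry tuple shape are assumptions:

```python
from typing import Optional, Tuple

Coord = Tuple[float, float, float]

def resolve_position(
    x: Optional[float],
    y: Optional[float],
    z: Optional[float],
    current_odom: Optional[Coord],
) -> Optional[Coord]:
    """Fill any omitted coordinate from current odometry.
    Returns None (so the store reports False) when a coordinate is
    missing and odometry is unavailable."""
    if current_odom is None and (x is None or y is None or z is None):
        return None
    cx, cy, cz = current_odom if current_odom is not None else (0.0, 0.0, 0.0)
    return (
        x if x is not None else cx,
        y if y is not None else cy,
        z if z is not None else cz,
    )
```

For example, passing only x with odometry available keeps the explicit x and fills y and z from the odometry reading.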
- start_episode(name: str) str¶
Start a named episode.
- end_episode() str¶
End the active episode and trigger consolidation.
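The episode bracketing that start_episode and end_episode provide can be sketched as a small state machine. This mirrors the described behavior only; the EpisodeTracker class, its return strings, and the implicit-close policy are assumptions, not eMEM internals:

```python
class EpisodeTracker:
    """Minimal sketch: start_episode opens a named span of observations,
    end_episode closes it and hands it off for consolidation."""

    def __init__(self):
        self.active = None
        self.consolidated = []  # stand-in for consolidated episode gists

    def start_episode(self, name: str) -> str:
        if self.active is not None:
            # Assumed policy: implicitly close any previous episode first.
            self.end_episode()
        self.active = name
        return f"Started episode '{name}'"

    def end_episode(self) -> str:
        if self.active is None:
            return "No active episode"
        name, self.active = self.active, None
        self.consolidated.append(name)  # consolidation would run here
        return f"Ended episode '{name}' and triggered consolidation"
```

Bracketing observations into named episodes is what later makes episode_summary and search_gists meaningful queries.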
- semantic_search(**kwargs) str¶
Search memory by meaning.
- spatial_query(**kwargs) str¶
Find observations within a radius of a point.
- temporal_query(**kwargs) str¶
Find observations in a time range.
- episode_summary(**kwargs) str¶
Get summary of one or more episodes.
- get_current_context(**kwargs) str¶
Get situational awareness.
- search_gists(**kwargs) str¶
Search consolidated memory summaries.
- entity_query(**kwargs) str¶
Find known entities.
- locate(**kwargs) str¶
Find the spatial location of a concept.
- recall(**kwargs) str¶
Recall everything known about a concept.
- body_status(**kwargs) str¶
Get latest body/internal state readings.
- register_tools_on(llm, tools: Optional[List[str]] = None, send_tool_response_to_model: bool = True) None¶
Register eMEM retrieval tools on an LLM component for tool calling.
- Parameters:
llm (LLM) – The LLM or Cortex component to register tools on.
tools (Optional[list[str]]) – Optional subset of tool names to register (default: all 10).
send_tool_response_to_model (bool) – Whether tool results are sent back to the model for a follow-up response.
Example usage:
memory.register_tools_on(llm, send_tool_response_to_model=True)

# Or register a subset:
memory.register_tools_on(llm, tools=["semantic_search", "locate", "get_current_context"])
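The subset selection that the tools parameter implies can be sketched as below. The tool names come from the list documented above; the select_tools helper and its error behavior are assumptions for illustration, not library code:

```python
# The 10 retrieval tools documented for this component.
ALL_TOOLS = [
    "semantic_search", "spatial_query", "temporal_query", "episode_summary",
    "get_current_context", "search_gists", "entity_query", "locate",
    "recall", "body_status",
]

def select_tools(requested=None):
    """Return the tools to register: all 10 by default, otherwise the
    requested subset, rejecting unknown names early."""
    if requested is None:
        return list(ALL_TOOLS)
    unknown = [t for t in requested if t not in ALL_TOOLS]
    if unknown:
        raise ValueError(f"Unknown tools: {unknown}")
    return list(requested)
```

Failing fast on unknown names keeps a typo in a tool list from silently registering nothing.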