Building agentic AI workflows often requires multiple moving parts: memory management, document retrieval, vector similarity, and orchestration.
Until now, these pieces had to be custom-wired.
But with the new native n8n nodes for MongoDB Atlas, we reduce that overhead dramatically.
With just a few clicks:
- Store and recall long-term memory from MongoDB
- Query vector embeddings stored in Atlas Vector Search
- Use these results in your LLM chains and automation logic
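To make the memory piece concrete, here is a minimal in-memory sketch of what long-term chat memory looks like at the data level. The MongoDB Chat Memory node manages this for you; the field names (`session_id`, `role`, `content`, `ts`) are illustrative assumptions, not the node's actual schema.

```python
from datetime import datetime, timezone

class ChatMemoryStore:
    """In-memory stand-in for a MongoDB-backed chat memory collection."""

    def __init__(self):
        self._sessions: dict[str, list[dict]] = {}

    def append(self, session_id: str, role: str, content: str) -> None:
        # In MongoDB this would be an update with $push on the session document.
        self._sessions.setdefault(session_id, []).append({
            "role": role,
            "content": content,
            "ts": datetime.now(timezone.utc).isoformat(),
        })

    def recall(self, session_id: str, last_n: int = 10) -> list[dict]:
        # Return the most recent messages to feed back into the LLM chain.
        return self._sessions.get(session_id, [])[-last_n:]
```

With the native node, this store-and-recall cycle happens automatically on every chat turn.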
In this example we present ingestion and AI Agent flows centered on travel planning. The points of interest we want the agent to know about are ingested into the vector store, and the AI Agent uses the vector store tool to retrieve relevant context about them when needed.
There are two main flows:

1. Ingestion flow - embeds each point of interest's title and description and stores the result in the embedding field of the points_of_interest collection.
2. Chat Message Trigger - chatting with the AI Agent stores the conversation via the MongoDB Chat Memory node. When the agent needs data, such as a location search or place details, it calls the "Vector Search" tool.
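The ingestion flow can be sketched in a few lines. Here `embed()` is a placeholder for a real embedding model (for example, an OpenAI text-embedding call returning 1536 floats); the field names mirror the points_of_interest collection used by the workflow.

```python
def embed(text: str) -> list[float]:
    # Placeholder: a real flow would call an embedding API here
    # and return a 1536-dimensional vector.
    raise NotImplementedError

def build_poi_document(title: str, description: str, embed_fn=embed) -> dict:
    """Combine title and description, embed them, and shape the document."""
    return {
        "title": title,
        "description": description,
        # One vector per document, stored under the path the index expects.
        "embedding": embed_fn(f"{title}\n{description}"),
    }

# Usage (with a real embedding function and a pymongo collection):
# doc = build_poi_document("Eiffel Tower", "Iconic landmark in Paris")
# db.points_of_interest.insert_one(doc)
```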
Vector Search Tool - uses the Atlas Vector Search index created on the points_of_interest collection:
```
// index name: "vector_index"
// If you change the embedding provider, make sure numDimensions matches the model.
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```
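Under the hood, querying this index is a `$vectorSearch` aggregation stage. Here is a sketch of the pipeline the tool effectively runs; the index name, path, and dimensions come from the definition above, while the query vector would come from the same embedding model used at ingestion time.

```python
def build_vector_search_pipeline(query_embedding: list[float], limit: int = 5) -> list[dict]:
    """Build a $vectorSearch aggregation pipeline for points_of_interest."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",         # must match the index name
                "path": "embedding",             # field holding the vectors
                "queryVector": query_embedding,  # 1536-dim list of floats
                "numCandidates": limit * 20,     # oversample for better recall
                "limit": limit,
            }
        },
        {
            # Keep only the fields the agent needs, plus the relevance score.
            "$project": {
                "title": 1,
                "description": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

# Usage (requires a live Atlas cluster and pymongo):
# results = db.points_of_interest.aggregate(build_vector_search_pipeline(vec))
```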
Additional Resources


