In Part 2 I built the foundation: extracting running data from Nike NRC, Strava, and Garmin, then embedding and querying it with OpenAI-powered vector search. In this part I expand the application to incorporate multimodal data, contextual external knowledge, and agentic tooling for more accurate AI insights.
My architecture now does the following:
Step 1 — Running data is extracted from Nike NRC, Strava, and Garmin.
Step 2 — Data, including multimodal records, is embedded and stored in the vector database (see the first sketch after this list).
Step 3 — User queries routed through the chat app initiate vector-based similarity searches for pertinent personal data (second sketch below).
Step 4 — The OpenAI GPT-5 LLM synthesizes responses, enhanced by injected external knowledge and agentic tool outputs (third sketch below).
Step 5 — Insights and predictions are returned in conversational form, evolving with feedback and new data.
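
To make Step 2 concrete, here is a minimal sketch of how multimodal records can share one embedding space: image-derived content (such as a route map) is first reduced to a caption so it can be embedded alongside plain-text run summaries. The record schema, the IDs, Chroma as the vector store, and the `text-embedding-3-small` model are all my assumptions for illustration; the actual stack may differ.

```python
# Step 2 sketch: embed run records (hypothetical schema) into a vector store.
# Assumes the OpenAI Python SDK and Chroma; neither store nor schema is from the post.
from openai import OpenAI
import chromadb

client = OpenAI()  # reads OPENAI_API_KEY from the environment
store = chromadb.Client().get_or_create_collection("runs")

# Hypothetical records; the multimodal one carries a caption generated from an
# image (e.g., a route map) so it can live in the same text embedding space.
records = [
    {"id": "nrc-2024-06-01", "text": "10 km easy run, 5:40/km, avg HR 148 (Nike NRC)"},
    {"id": "strava-2024-06-03", "text": "Route-map caption: hilly loop, 320 m elevation gain (Strava)"},
]

resp = client.embeddings.create(
    model="text-embedding-3-small",  # assumed embedding model
    input=[r["text"] for r in records],
)
store.add(
    ids=[r["id"] for r in records],
    embeddings=[d.embedding for d in resp.data],
    documents=[r["text"] for r in records],
)
```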
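Step 3 then amounts to embedding the user's question with the same model and querying the store. This continues the assumed Chroma setup from the previous sketch; in the real application the collection would be persistent rather than in-memory.

```python
# Step 3 sketch: a user question is embedded and matched against stored runs.
# Assumes the collection was populated in the same process as the previous sketch.
from openai import OpenAI
import chromadb

client = OpenAI()
store = chromadb.Client().get_or_create_collection("runs")

question = "How did my hilly runs compare to my flat 10Ks last month?"
q_emb = client.embeddings.create(
    model="text-embedding-3-small",  # must match the model used at ingest time
    input=question,
).data[0].embedding

hits = store.query(query_embeddings=[q_emb], n_results=3)
context = "\n".join(hits["documents"][0])  # pertinent personal data for the prompt
```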
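For Step 4, here is a hedged sketch of the synthesis call: retrieved runs, injected external knowledge, and a tool result are composed into the prompt before calling the chat model. In the real application the agentic loop would dispatch tools via function calling; to keep this short, a tool output is pre-computed and injected. The prompt text, tool-output shape, and knowledge snippet are hypothetical; `gpt-5` is the model named in this post.

```python
# Step 4 sketch: GPT-5 synthesizes an answer from retrieved personal data,
# injected external knowledge, and an agentic tool's output (all illustrative).
from openai import OpenAI

client = OpenAI()

retrieved = "10 km easy run, 5:40/km (Nike NRC)\nhilly loop, 320 m elevation gain (Strava)"
external = "Illustrative coaching guidance injected from an external knowledge source."
tool_output = '{"seven_day_load_km": 42, "trend": "rising"}'  # e.g., from a training-load tool

messages = [
    {
        "role": "system",
        "content": (
            "You are a running coach. Ground every claim in the data provided.\n"
            f"External knowledge:\n{external}"
        ),
    },
    {
        "role": "user",
        "content": (
            "How did my hilly runs compare to my flat 10Ks?\n\n"
            f"Matching runs from vector search:\n{retrieved}\n\n"
            f"Tool output (training load):\n{tool_output}"
        ),
    },
]
answer = client.chat.completions.create(model="gpt-5", messages=messages)
print(answer.choices[0].message.content)  # Step 5: conversational insight returned to the user
```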