Monitoring Agent Platform

Microservice Engineer

A dual-service architecture that ingests agent events and simulates realistic monitoring data with Groq (Llama 3.1), packaged for containerized deployment on AWS EC2.

  • Window: 30 events
  • Availability: Docker / EC2
  • Core Stack: Groq / TS

The Challenge

The project needed a robust way to ingest agent events and monitor them through a Prometheus-style interface. The goal was a functional prototype for testing LLM-driven event simulation in real time.

My Approach

Built a dual-service architecture. Service A handles ingestion and metrics, while Service B uses Groq (Llama 3.1) to generate realistic monitoring data. Deployed the entire stack using Docker for easy replication on AWS EC2.
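A minimal sketch of the contract between the two services, assuming Service A exposes a POST /events endpoint on port 3000 and both services share an AgentEvent shape; the endpoint, port, and field names are illustrative rather than confirmed by the project:

    // Shared event shape (field names are assumptions for illustration).
    interface AgentEvent {
      agentId: string;
      eventType: "task_started" | "task_completed" | "error";
      timestamp: string; // ISO 8601
      payload: Record<string, unknown>;
    }

    // Service B pushes each simulated event to Service A's ingestion endpoint.
    async function pushEvent(event: AgentEvent): Promise<void> {
      const res = await fetch("http://service-a:3000/events", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(event),
      });
      if (!res.ok) {
        throw new Error(`Ingestion failed with status ${res.status}`);
      }
    }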

Core Technical Accomplishments

Dual-Service REST API

Built a Node.js API with an in-memory rolling buffer that tracks the last 30 events and exposes a live Prometheus metrics endpoint for monitoring.
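A minimal sketch of the rolling buffer and metrics endpoint, assuming an Express-based Service A; the framework choice, port, and metric names are assumptions, not the project's actual code:

    import express from "express";

    type AgentEvent = Record<string, unknown>; // minimal stand-in for the real event type

    // Fixed-size in-memory buffer: only the most recent 30 events are retained.
    const WINDOW_SIZE = 30;
    const events: AgentEvent[] = [];
    let totalIngested = 0;

    const app = express();
    app.use(express.json());

    app.post("/events", (req, res) => {
      events.push(req.body as AgentEvent);
      if (events.length > WINDOW_SIZE) events.shift(); // evict the oldest entry
      totalIngested += 1;
      res.status(202).json({ buffered: events.length });
    });

    // Prometheus text exposition format, served at /metrics.
    app.get("/metrics", (_req, res) => {
      res.type("text/plain").send(
        [
          "# HELP agent_events_total Events ingested since startup.",
          "# TYPE agent_events_total counter",
          `agent_events_total ${totalIngested}`,
          "# HELP agent_events_window_size Events currently in the rolling window.",
          "# TYPE agent_events_window_size gauge",
          `agent_events_window_size ${events.length}`,
        ].join("\n"),
      );
    });

    app.listen(3000);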

Groq/Llama Simulation

Integrated Groq with Llama 3.1 to dispatch realistic agent events every 60 seconds, replacing static test data with dynamic, production-like payloads.
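A sketch of the 60-second simulation loop using the groq-sdk client; the model id, prompt, and Service A URL are assumptions:

    import Groq from "groq-sdk";

    // Assumes GROQ_API_KEY is set in the environment.
    const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

    async function generateEvent(): Promise<unknown> {
      const completion = await groq.chat.completions.create({
        model: "llama-3.1-8b-instant", // illustrative Llama 3.1 model id
        messages: [
          {
            role: "user",
            content:
              "Generate one realistic agent monitoring event as a single JSON object " +
              "with agentId, eventType, timestamp, and payload fields. Return JSON only.",
          },
        ],
      });
      return JSON.parse(completion.choices[0].message.content ?? "{}");
    }

    // Every 60 seconds, dispatch a freshly simulated event to Service A.
    setInterval(async () => {
      try {
        const event = await generateEvent();
        await fetch("http://service-a:3000/events", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(event),
        });
      } catch (err) {
        console.error("Simulation tick failed:", err);
      }
    }, 60_000);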

Docker & AWS EC2

Containerized both services with Docker Compose for a 'one-command' setup on an AWS EC2 instance, including pre-configured security groups for metric access.
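A minimal docker-compose sketch of the one-command setup; service names, build paths, and the published port are assumptions, with the published port being the one the EC2 security group would need to open for metric access:

    # docker-compose.yml (illustrative layout, not the project's actual file)
    services:
      ingestion:            # Service A: event ingestion + /metrics
        build: ./service-a
        ports:
          - "3000:3000"     # expose the metrics endpoint through the EC2 security group
      simulator:            # Service B: Groq-driven event generator
        build: ./service-b
        environment:
          - GROQ_API_KEY=${GROQ_API_KEY}
        depends_on:
          - ingestion

With a layout like this, running docker compose up -d --build on the EC2 instance brings both services up in one step.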

Results & Impact

  • Successfully integrated Groq with Llama 3.1 for high-fidelity LLM event simulation
  • Deployed a containerized stack to AWS EC2 with a functional Prometheus metrics endpoint
  • Achieved zero record loss within the 30-event rolling-window tracking logic
  • Implemented a dual-service architecture for clean separation of concerns and scalability

Tech Stack

Node.js · TypeScript · Groq (Llama 3.1) · Docker · AWS EC2 · Prometheus