09 January 2026

Last December, I completed Google’s five-day intensive Agentic AI course, a detailed introduction to the current state of agentic AI development. With the growing shift toward agent-based ecosystems, I wanted to understand the standards and best practices shaping this space. It was five intense days of coding and learning exercises, aimed at training participants to build LLM-based ecosystems.

The Course

The course focused on building complex workflows using LLM agents through Google’s Agent Development Kit (ADK). Key topics included:

  1. Task decomposition: Breaking workflows into smaller, manageable modules and orchestrating multiple agents in sequence, parallel, or hybrid setups for easier maintenance and debugging.
  2. MCP (Model Context Protocol): How agents connect to external systems and tools, similar to APIs.
  3. Context engineering: Techniques to maintain context efficiently over longer periods while minimizing resource usage.
  4. Quality assurance: Testing and validating agent behavior.
  5. Deployment strategies: Moving from development to production environments.
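To make the first topic concrete, here is a minimal sketch of task decomposition in plain Python, with stub functions standing in for LLM agents (all names are hypothetical and this is not the ADK API): the same two "agents" can be chained sequentially, where each consumes the previous output, or run in parallel on the same input.

```python
import asyncio

# Hypothetical stub "agents"; in a real system each would wrap an LLM call.
async def extract_agent(text: str) -> str:
    return f"entities({text})"

async def summarize_agent(text: str) -> str:
    return f"summary({text})"

async def sequential_pipeline(task: str) -> str:
    # Sequential orchestration: each agent consumes the previous agent's output.
    step1 = await extract_agent(task)
    return await summarize_agent(step1)

async def parallel_pipeline(task: str) -> list[str]:
    # Parallel orchestration: independent agents run concurrently on the same input.
    return list(await asyncio.gather(extract_agent(task), summarize_agent(task)))

print(asyncio.run(sequential_pipeline("report")))   # summary(entities(report))
print(asyncio.run(parallel_pipeline("report")))     # ['entities(report)', 'summary(report)']
```

The hybrid setups the course covers simply mix the two: parallel fan-out for independent subtasks, then a sequential step to merge results.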

My Understanding of Agentic AI Development

After finishing the course and completing the capstone project, designing workflows felt chaotic to me; extremely careful planning is essential to decide what to modularize and what to discard in order to build a stable system. Debugging in this paradigm is very different from traditional coding: building is fast, but debugging, maintenance, and onboarding new team members take significantly longer. Prompt selection mattered massively; tiny changes drastically altered outcomes, making version control and evaluation plans critical. It’s also really hard to predict which slight change to a prompt, and in which direction, will lead to better outcomes. The ADK Web UI looks great but isn’t practical for repeated testing; direct coding and structured evaluation plans work better. The ADK library is evolving rapidly and is still early-stage, so close monitoring is needed.
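The structured evaluation plans mentioned above can be as simple as scoring prompt variants against a small labeled set instead of eyeballing UI output. A minimal sketch, with a deterministic stand-in for the model call (all names here are hypothetical, not part of ADK):

```python
def fake_llm(prompt: str, case: str) -> str:
    # Stand-in for a real model call; deterministic so the example is checkable.
    return "positive" if "sentiment" in prompt and "good" in case else "negative"

# Two prompt variants under version control, and a tiny labeled test set.
PROMPTS = {
    "v1": "Classify the text.",
    "v2": "Classify the sentiment of the text as positive or negative.",
}
CASES = [("good product", "positive"), ("bad service", "negative")]

def score(prompt: str) -> float:
    # Fraction of cases where the model output matches the expected label.
    hits = sum(fake_llm(prompt, text) == label for text, label in CASES)
    return hits / len(CASES)

scores = {name: score(p) for name, p in PROMPTS.items()}
print(scores)  # {'v1': 0.5, 'v2': 1.0}
```

Re-running a harness like this after every prompt tweak is what turns "tiny changes drastically altered outcomes" from a surprise into a measured regression.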

Overall, this felt like a ramped-up prompt-setting exercise rather than full-fledged “engineering.” We’re essentially trying to make a nondeterministic system (LLMs) behave deterministically (using ADK) while still leaving room for autonomous decision-making. It’s like working with an intern: you want them to follow your instructions but also make good judgment calls in gray areas, in a way that aligns with your vision.

For my research domain, computational solutions to California’s public health problems, where stability and reproducibility are critical, this technology still feels too dynamic (agents drift weekly, while our SAS/R/Python workflows run rock-solid for years). My takeaway: promising, but not yet production-ready for certain use cases. I’ll be watching closely as the industry matures; the learning was still helpful in keeping me connected with new LLM-driven solutions.

Source: https://www.linkedin.com/posts/riddhimanadib_last-december-i-completed-googles-5-day-activity-7415485923003600896-9DQn?utm_source=share&utm_medium=member_desktop&rcm=ACoAABRvn20B1y0ItxnYZnxZTyl-tNonNXLuAgE

Kaggle Certificate: Course Name