Well, by this point we have over 1000 lines in our markdown file. This one is mostly for fun.
If you've been waiting for an introduction to humanlayer, this is it. If you are already practicing Factor 6 - launch/pause/resume with a simple API - and Factor 7 - contacting humans via tool calls - then you are ready to adopt this factor. Allow the user to start/pause/resume from s...
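As a minimal illustration (this is not the humanlayer API; the `request_human_input` intent, `DeployBackend`, and the surrounding names are assumptions), "contacting a human" can be just another structured step the model emits, which the outer loop interprets as "pause here" instead of running code:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical structured steps the LLM can emit; "request_human_input" is just
# another tool, except the loop pauses the thread instead of running code.

@dataclass
class RequestHumanInput:
    intent: Literal["request_human_input"]
    question: str
    urgency: str                      # e.g. "low" | "high"

@dataclass
class DeployBackend:
    intent: Literal["deploy_backend"]
    tag: str

def handle_next_step(step) -> str:
    """Outer loop: decide whether to keep going or hand control to a human."""
    if isinstance(step, RequestHumanInput):
        # Persist the thread, notify the human (Slack, email, SMS, ...), stop the loop.
        return "paused: waiting for human"
    if isinstance(step, DeployBackend):
        # Plain deterministic code path.
        return f"deploying {step.tag}"
    raise ValueError(f"unknown step: {step!r}")

print(handle_next_step(
    RequestHumanInput("request_human_input", "OK to deploy v1.2.3 to prod?", "high")))
```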
Instead of building monolithic agents that try to do everything, it is better to build small, focused agents that do one thing well. Agents are just one building block in a larger, mostly deterministic system. The key insight here is a limitation of large language models: the larger and more complex the task, the...
This is a small point, but worth mentioning. One of the benefits of the agent pattern is "self-healing": for short tasks, a large language model (LLM) may make a tool call that fails. There is a good chance that a capable LLM can read the error message or stack trace and...
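A rough sketch of what that can look like in code (the event shapes, field names, and the consecutive-error cap are assumptions for illustration): on failure, the error text goes back into the context so the model can read it and adjust, with an escalation path so it doesn't loop forever.

```python
def run_step_with_self_healing(thread: dict, step: dict, run_tool, max_errors: int = 3) -> dict:
    """On tool failure, append the error to the event history so the LLM sees it next turn."""
    try:
        result = run_tool(step)
        thread["consecutive_errors"] = 0
        thread["events"].append({"type": "tool_result", "data": result})
    except Exception as exc:
        thread["consecutive_errors"] = thread.get("consecutive_errors", 0) + 1
        thread["events"].append({"type": "error", "data": str(exc)})
        if thread["consecutive_errors"] >= max_errors:
            # Don't spin forever on the same failure; escalate to a human instead.
            thread["status"] = "waiting_for_human"
    return thread

def flaky_deploy(step):
    raise RuntimeError("image v1.2.3 not found in registry")

thread = {"events": [], "status": "running"}
run_step_with_self_healing(thread, {"intent": "deploy_backend"}, run_tool=flaky_deploy)
print(thread["events"][-1])   # {'type': 'error', 'data': 'image v1.2.3 not found in registry'}
```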
If you own your control flow, you can implement many interesting features and build custom control structures that fit your particular use case. In particular, certain kinds of tool calls may be a reason to jump out of the loop and wait for a human to respond, or to wait for another long-running task (e.g., a training pipeline)...
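Here is one way such a hand-rolled loop can look (a sketch; `llm_next_step`, `run_tool`, and the intent names are all assumed for illustration): the loop itself decides when to keep going, when to break out for a human, and when to park the thread behind a long-running job.

```python
def agent_loop(thread: dict, llm_next_step, run_tool) -> dict:
    """Own the control flow: the loop, not the framework, decides what each step means."""
    while True:
        step = llm_next_step(thread["events"])            # e.g. {"intent": "...", ...}
        thread["events"].append({"type": "llm_step", "data": step})

        if step["intent"] == "done":
            thread["status"] = "done"
            return thread

        if step["intent"] in ("request_human_approval", "request_human_input"):
            # Break out entirely: serialize state, notify a human, resume later via webhook.
            thread["status"] = "waiting_for_human"
            return thread

        if step["intent"] == "start_training_pipeline":
            # Long-running task: kick it off elsewhere and park the thread until it finishes.
            thread["status"] = "waiting_for_pipeline"
            return thread

        # Everything else is a plain synchronous tool call.
        thread["events"].append({"type": "tool_result", "data": run_tool(step)})

# Tiny usage with a stubbed model and tool:
scripted = iter([{"intent": "list_git_tags"},
                 {"intent": "request_human_approval", "question": "deploy v1.2.3?"}])
result = agent_loop({"events": [], "status": "running"},
                    llm_next_step=lambda events: next(scripted),
                    run_tool=lambda step: {"tags": ["v1.2.3"]})
print(result["status"])   # waiting_for_human
```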
By default, large language model (LLM) APIs hinge on one fundamentally high-stakes token choice: do we return plain text content, or do we return structured data? A lot of weight is placed on that first token choice, which in the case of "the weather in Tokyo"...
Agents are programs, and we expect to be able to start, query, resume, and stop them in some way. Users, applications, pipelines, and other agents should be able to launch an agent easily through a simple API. When long-running operations are needed, agents and the deterministic code that orchestrates them...
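Concretely, the outer surface can be as small as four operations. An in-memory sketch follows (the function names, the `THREADS` store, and the event shapes are assumptions; in practice these would sit behind HTTP endpoints, a queue, or a CLI):

```python
import uuid

THREADS: dict[str, dict] = {}   # stand-in for a real database

def start(message: str) -> str:
    """Launch a new agent run; callable by a user, a cron job, a webhook, or another agent."""
    thread_id = str(uuid.uuid4())
    THREADS[thread_id] = {"status": "running",
                          "events": [{"type": "user_message", "data": message}]}
    return thread_id

def query(thread_id: str) -> dict:
    """Inspect status and history without affecting execution."""
    return THREADS[thread_id]

def resume(thread_id: str, event: dict) -> None:
    """Append an external event (human reply, pipeline result) and mark the thread runnable."""
    THREADS[thread_id]["events"].append(event)
    THREADS[thread_id]["status"] = "running"

def stop(thread_id: str) -> None:
    THREADS[thread_id]["status"] = "stopped"

# Usage
tid = start("summarize yesterday's production deploys")
resume(tid, {"type": "human_response", "data": "include staging too"})
print(query(tid)["status"])   # running
```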
Even outside the AI space, many infrastructure systems try to separate "execution state" from "business state". For AI applications, this can mean complex abstractions for tracking things like the current step, the next step, wait status, retry counts, and so on. This separation adds complexity, and while it may be worthwhile,...
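The alternative is to keep both kinds of state in one serialized object. A minimal sketch, with illustrative field names: because the whole thread is plain JSON, you can persist it anywhere and reload it to resume.

```python
import json

# One object carries both execution state (status, retries) and business state (events).
thread = {
    "status": "waiting_for_human",     # execution state
    "retry_count": 1,                  # execution state
    "events": [                        # business state: everything that happened so far
        {"type": "user_message",           "data": "deploy v1.2.3 to prod"},
        {"type": "tool_call",              "data": {"intent": "list_git_tags"}},
        {"type": "tool_result",            "data": {"tags": ["v1.2.3"]}},
        {"type": "request_human_approval", "data": "OK to deploy v1.2.3?"},
    ],
}

# Persist and restore with no extra bookkeeping layer.
restored = json.loads(json.dumps(thread))
assert restored["status"] == "waiting_for_human"
assert len(restored["events"]) == 4
```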
Tools don't need to be complicated. At their core, they are just structured output from your large language model (LLM) that triggers deterministic code. For example, suppose you have two tools, CreateIssue and SearchIssues. Asking an LLM to "use one of several tools" is really asking it to output...
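A sketch of what that looks like in practice (the JSON shapes and the stubbed handlers are assumptions): the model emits one small JSON object, and ordinary deterministic code routes on it.

```python
import json

def dispatch_tool_call(raw_llm_output: str) -> dict:
    """Route the model's structured output to deterministic code."""
    call = json.loads(raw_llm_output)
    if call["intent"] == "create_issue":
        # Imagine a real issue-tracker API call here.
        return {"created": True, "title": call["title"]}
    if call["intent"] == "search_issues":
        # Imagine a real search here.
        return {"results": [], "query": call["query"]}
    raise ValueError(f"unknown intent: {call['intent']}")

# "Use one of the tools" really means: emit JSON like this.
print(dispatch_tool_call('{"intent": "create_issue", "title": "login page returns 500"}'))
```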
You don't have to use a standardized, message-based format to deliver context to the large language model. At any given moment, your input to the LLM in an AI agent is "here's everything that has happened so far; what is the next step?" It's all context engineering. The LLM is...
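For example, you might render the whole history into a single prompt in whatever layout the model handles best; the XML-ish tags below are just one possible choice, not a required format:

```python
def render_context(events: list[dict]) -> str:
    """Everything that has happened so far, packed into one prompt string you control."""
    lines = ["Here is everything that has happened so far:\n"]
    for event in events:
        lines.append(f"<{event['type']}>\n{event['data']}\n</{event['type']}>\n")
    lines.append("What should the next step be?")
    return "\n".join(lines)

print(render_context([
    {"type": "user_message", "data": "can you deploy v1.2.3 to prod?"},
    {"type": "tool_result", "data": "tests passed on v1.2.3"},
]))
```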
Don't outsource your prompt engineering to a framework. Incidentally, this is far from novel advice: some frameworks offer a "black box" approach like this: agent = Agent( role="..." , goal="..." , personality="..." , tools=...
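The alternative is to keep the prompt as first-class code you can read, diff, and test. A sketch (the wording and the release-manager scenario are made up for illustration):

```python
def deploy_prompt(rendered_events: str) -> str:
    """An explicit, owned prompt instead of role/goal/personality fields hidden in a framework."""
    return f"""You are a careful release manager.
You help engineers deploy backend and frontend services.
Before any production deploy you must get explicit human approval.

{rendered_events}

Respond with a single JSON object describing the next step."""

print(deploy_prompt("<user_message>deploy v1.2.3 to prod</user_message>"))
```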
One of the most common patterns when building agents is converting natural language into structured tool calls. This is a powerful pattern that lets you build agents that can reason about a task and execute it. Applied atomically, this pattern takes a phrase (e.g.) that you can use for Ter...
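Combined with tool dispatching, the atomic version of the pattern is a single step from a phrase to a typed object. In the sketch below the LLM call is stubbed so the example runs offline, and the `CreateIssue` shape simply follows the earlier example:

```python
import json
from dataclasses import dataclass

@dataclass
class CreateIssue:
    """Target shape for the phrase -> structured object step (illustrative)."""
    title: str
    body: str

def extract_structured(phrase: str) -> CreateIssue:
    # In a real system this is one LLM call constrained to a JSON schema.
    # Here the model's reply is stubbed so the example is self-contained.
    fake_llm_reply = json.dumps({
        "title": "login page returns 500",
        "body": phrase,
    })
    return CreateIssue(**json.loads(fake_llm_reply))

print(extract_structured("the login page has been returning a 500 error since this morning"))
```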
The detailed version: how we got here. You don't have to listen to me. Whether you're new to agents or a grumpy veteran like me, I'm going to try to convince you to throw out most of your preconceptions about AI agents, take a step back, and rethink them from first principles. (As...
A Comprehensive Introduction: "12-Factor Agents" is not a specific software library or framework, but a set of design principles for building reliable, scalable, and maintainable LLM (large language model) applications. The project was started by the developer Dex, who realized that many teams were using existing AI agent ...
FineTuningLLMs is a GitHub repository created by author dvgodoy, based on his book A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face. This repository...
Large language model (LLM) technology is changing rapidly, and the open-source community is producing a wealth of valuable learning resources. These projects are a treasure trove of practical knowledge for developers who want to master LLMs systematically. In this article, we'll take an in-depth look at nine widely acclaimed open-source projects on GitHub, not...
Introduction. This course covers: how to plan effectively for deploying AI agents to a production environment; common mistakes and problems you may encounter when deploying AI agents to production; and how to manage costs while maintaining AI agent performance. Learning objectives: after completing this course, you will know...
Introduction. Welcome to the course on metacognition in AI agents! This chapter is designed for beginners interested in how AI agents think about their own thought processes. By the end of this course, you will understand the key concepts and have practical examples of applying metacognition to AI agent design. Learning objectives ...
When you start working on a project that involves multiple agents, you need to consider the multi-agent design pattern. However, it may not be obvious when to move to a multi-agent setup and what the advantages are. Introduction. In this course, Microsoft tries to answer the following questions: Which scenarios are suited to multi-agent systems? What advantages do they offer over a single agent?