Guide to AI
Aug 7, 2023

Interview: Human-level AI and AI Agents with Josh Albrecht, CTO of Generally Intelligent

An interview with Josh Albrecht, founder of Generally Intelligent and Outset Capital


This interview was originally posted on The Hitchhiker's Guide to AI, an AI newsletter by AJ Asver, CEO and Co-Founder of Parcha.

I’ve been spending a lot of time researching, experimenting and building AI agents at Parcha. That’s why I immediately jumped at the chance to interview AI researcher Josh Albrecht, who is the CTO and co-founder of Generally Intelligent. Generally Intelligent’s work on AI Agents is at the bleeding edge of where AI is headed.

In our conversation, we talk about how Josh defines AGI, how close we are to achieving it, what exactly an AI researcher does, and his company’s work on AI agents. We also hear about Josh’s investment thesis for Outset Capital, the AI venture capital fund he started with his co-founder Kanjun Qiu.

Overall it was a really great interview and we covered a lot of ground in a short period of time. If you’re as excited about the potential of AI agents as I am, or want to better understand where research in this space is heading, this interview is definitely worth listening to in full.

Here are some of the highlights:

  • Defining AGI: Josh shares his definition of AGI, which he calls Human-level AI, a machine’s ability to perform tasks that require human-like understanding and problem-solving skills. It involves passing a specific set of tests that measure performance in areas like language, vision, reasoning, and decision-making.
  • Generally Intelligent: Generally Intelligent's goal is to create more general, capable, robust, and safer AI systems. Specifically, they are focused on developing digital agents that can act on your computer, like in your web browser, desktop, and editor. These agents can autonomously complete tasks and run on top of language models like GPT. However, those language models were not created with this use case in mind, making it challenging to build fully functional digital agents.
  • Emergent behavior: Josh believes that the emergent behavior we are seeing in models today can be traced back to training data. For example, the ability to string together chains of thought could come from transcripts of gamers thinking aloud on Twitch.
  • Memory systems: When it comes to memory systems for powerful agents, there are a few key things to consider. First of all, what do you want to store, and what aspects do you want to pay attention to when recalling things? Josh’s view is that while it might seem daunting, this isn't actually a crazy hard problem, given that we already know how to do this for non-AI systems.
  • Reducing latency: One way to get around the current latency when interacting with LLMs that are following chains of thought with agentic behavior is to change user expectations. Make the agent continuously communicate updates to the user, for example, rather than staying silent until it provides the final answer. The agent could send updates during the process, saying something like "I'm working on it, I'll let you know when I have an update." This can make the user feel more reassured that the agent is working on the task, even if it's taking some time.
  • Parallelizing chain of thought: Josh believes we can parallelize more of the work done by agents in chain of thought processes, asking many questions at once and then combining them to reach a final output for the user.
  • AI research day-to-day: Josh shared that much of the work he does as an AI researcher is not that different from other software engineering tasks. There’s a lot of writing code, waiting to run it and then dealing with bugs. It’s still a lot faster than research in the physical sciences where you have to wait for cells to grow for example!
  • Acceleration vs deceleration: Josh shared his viewpoints for both sides of the argument for accelerating vs decelerating AI. He also believes there are fundamental limits to how fast AI can be developed today and this could change a lot in 10 years as processing speeds continue to improve.
  • AI regulation: We discussed how it’s challenging to regulate AI due to the open-source ecosystem but that some form of regulation is probably required.
  • Universal unemployment: Josh shared his concern that we need to get ahead on educating people about the potential societal impact of AI, including how it could lead to “universal unemployment”.
  • Investing in AI startups: Josh shared Outset Capital’s investment thesis and how it’s difficult to predict what moats will be most important in the future.
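To make the memory-systems point above concrete, here is a toy sketch of the kind of store-and-recall mechanism the discussion gestures at. Everything here (the `MemoryStore` class, the cosine-similarity ranking, the hand-written embeddings) is illustrative on my part, not Generally Intelligent's actual design:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class MemoryStore:
    """Toy agent memory: store (embedding, text) pairs, recall the
    top-k entries most similar to a query embedding."""

    def __init__(self):
        self.items = []

    def store(self, embedding, text):
        self.items.append((embedding, text))

    def recall(self, query_embedding, k=2):
        ranked = sorted(
            self.items,
            key=lambda item: cosine(item[0], query_embedding),
            reverse=True,
        )
        return [text for _, text in ranked[:k]]
```

In a real system the embeddings would come from a model and the store would be a vector database, but the core operation is the same: decide what to store, embed it, and rank by similarity at recall time.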
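The parallelization idea above can be sketched in a few lines: instead of asking sub-questions one at a time, fire them concurrently and combine the partial answers. The `ask` stub below stands in for a real LLM call (the function names and the join step are my assumptions, not a described implementation):

```python
import asyncio

async def ask(question: str) -> str:
    # Placeholder for a real LLM API call; here we just echo the question.
    await asyncio.sleep(0.01)  # simulate network latency
    return f"answer to: {question}"

async def parallel_chain(questions: list[str]) -> str:
    # Launch all sub-questions concurrently rather than sequentially,
    # then combine the partial answers into one final output.
    answers = await asyncio.gather(*(ask(q) for q in questions))
    return " | ".join(answers)
```

With sequential calls the latency grows linearly with the number of sub-questions; with `asyncio.gather` it is bounded by the slowest single call, which is the payoff Josh is pointing at.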

Episode Links:

The Hitchhiker’s Guide to AI:

Generally Intelligent:

Josh on Twitter:

Episode Content:

00:00 Intro

01:42 What is AGI?

04:40 When will we know that we have AGI?

05:40 Self-driving cars vs AGI

07:10 Generally Intelligent's research on agents

09:51 Emergent Behaviour

11:07 Beyond language models

13:17 Memory and vector databases

15:25 Latency when interacting with agents

17:13 Interacting with agents like we interact with people

19:08 Chain of thought

19:44 What do AI researchers do?

21:44 Acceleration vs. Deceleration of AI

24:05 LLMs as natural language-based CPUs

24:56 Regulating AI

27:31 Universal unemployment
