stream GPT-3.5 and GPT-4 response times Some of the LLM apps we've been experimenting with have been extremely slow, so we asked ourselves: what do GPT APIs' response times depend on? It turns out that response time mostly depends on the number of output tokens generated by the model. Why? Because LLM latency
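To see the effect, here is a minimal sketch that times chat completions at different output-length caps. It assumes the pre-1.0 `openai` Python package, an `OPENAI_API_KEY` in the environment, and an illustrative prompt; exact numbers will vary, but latency should grow roughly with `completion_tokens`.

```python
import time
import openai  # pre-1.0 interface assumed; needs OPENAI_API_KEY in the environment

def time_completion(max_tokens: int) -> tuple[float, int]:
    """Request one completion capped at max_tokens; return (seconds, tokens generated)."""
    start = time.time()
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell a long story about a slow API."}],
        max_tokens=max_tokens,
    )
    return time.time() - start, response["usage"]["completion_tokens"]

for cap in (16, 64, 256, 1024):
    seconds, tokens = time_completion(cap)
    print(f"max_tokens={cap:5d}  output_tokens={tokens:5d}  latency={seconds:6.2f}s")
```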
stream How data is used for LLM programming Software 1.0 -- the non-AI, non-ML sort -- extensively uses testing to validate that things work. These tests are basically hand-written rules and assertions. For example, a regular expression can be easily tested with strings that should and should not give a match. In software 2.0, and specifically supervised
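As an illustration of that kind of hand-written test, a small sketch (the date pattern and test strings are made up for this example):

```python
import re

# A pattern meant to match ISO dates like "2023-03-14" (illustrative only).
ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

# Software 1.0 testing: hand-written strings that should and should not match.
assert ISO_DATE.match("2023-03-14")
assert not ISO_DATE.match("14/03/2023")   # wrong format
assert not ISO_DATE.match("2023-3-14")    # month must be two digits
print("all assertions passed")
```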
stream Hacky multimodality GPT-4 supports images as an optional input, according to OpenAI's press release. As far as I can tell, only one company has access. Which makes you wonder: how can you get multimodality support already today? There are basically two ways to add image support to an LLM: 1.
stream First thoughts on AI moratorium Context: first thoughts on Pause Giant AI experiments. I will refine my thinking over time. * I had not thought about AI safety much since ~2017, after thinking a lot about it in 2014-2017. In 2017, I defended my MSc thesis on an AI-safety-inspired topic (though very narrow and technical in
stream Agents are self-altering algorithms Chain-of-thought reasoning is surprisingly powerful when combined with tools. It feels like a natural programming pattern for LLMs: thinking by writing. And it's easy to see the analogy to humans: verbalizing your thoughts in written (journalling) or spoken ("talking things through") form is a good way
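One way to picture the pattern is a loop in which the model's written thoughts, and any tool results, are appended back into its own input. The sketch below is purely illustrative; `call_llm` and `run_tool` are hypothetical stand-ins, not a real framework.

```python
# Illustrative "think by writing" agent loop; call_llm and run_tool are
# hypothetical stand-ins for an LLM API call and a tool dispatcher.
def call_llm(transcript: str) -> str:
    raise NotImplementedError  # e.g. a chat-completion API call

def run_tool(action_line: str) -> str:
    raise NotImplementedError  # e.g. a calculator, search, or code runner

def agent(task: str, max_steps: int = 10) -> str:
    transcript = (
        f"Task: {task}\n"
        "Think step by step. Write 'Action: <tool> <input>' to use a tool, "
        "or 'Final answer: ...' when done.\n"
    )
    for _ in range(max_steps):
        thought = call_llm(transcript)   # the model thinks by writing
        transcript += thought + "\n"     # its own words become part of the next input
        if thought.startswith("Final answer:"):
            return thought
        if thought.startswith("Action:"):
            transcript += f"Observation: {run_tool(thought)}\n"  # tool output alters the program
    return transcript
```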
stream Index to reduce context limitations There is a very simple, standardized way of solving the problem of GPT context windows that are too small. This is what to do when the context window gets full: Indexing: 1. Chunk up your context (book text, documents, messages, whatever). 2. Put each chunk through the text-embedding-ada-002 embedding model, and store
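A minimal sketch of the indexing step plus the usual query-time retrieval, assuming the pre-1.0 `openai` package, an `OPENAI_API_KEY` in the environment, and a made-up `book.txt`; a real setup would use a proper text splitter and a vector store instead of a NumPy array.

```python
import numpy as np
import openai  # pre-1.0 interface assumed; needs OPENAI_API_KEY in the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts with text-embedding-ada-002."""
    response = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in response["data"]])

def chunk(text: str, size: int = 1000) -> list[str]:
    """Naive fixed-size chunking; real splitters respect sentence boundaries."""
    return [text[i : i + size] for i in range(0, len(text), size)]

# 1. Chunk up the context. 2. Embed each chunk and store the vectors.
chunks = chunk(open("book.txt").read())
index = embed(chunks)  # one row per chunk

# At query time: embed the question and pull the closest chunks into the prompt.
query = embed(["What does the author say about pricing?"])[0]
scores = index @ query / (np.linalg.norm(index, axis=1) * np.linalg.norm(query))
context_for_prompt = "\n\n".join(chunks[i] for i in np.argsort(-scores)[:3])
```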
stream Langchain frustration I was building an agent with langchain today, and was very frustrated with the developer experience. It felt hard to use, hard to get things right, and overall I was confused a lot while developing (for no good reason). Why is it so hard to use? Here are some of
stream Office 365 copilot is extremely impressive Microsoft recently held an event where they announced the "Office 365 Copilot". It was extremely impressive to me, to the extent that (when this launches) I will consider switching from Google's work suite to Microsoft's. Why is this announcement so impressive? In one word,
stream Your non-AI moat "What's the moat of your AI company?" That seems to be top of mind for founders pitching their novel idea to VCs -- and the most common question they get. But I think it might not be the right question. Right now, nobody seems to have
stream GPT-4, race, and applications GPT-4 came out yesterday and overshadowed several other announcements, each of which would have been bombshell news otherwise: * Anthropic AI announcing their ChatGPT-like API -- likely the strongest competitor to OpenAI today (waitlist only) * Google announcing PaLM API (currently it's just marketing -- no public access yet) * Adept AI announcing
stream Alpaca is not as cheap as you think "Alpaca is just $100 and competitive with InstructGPT" -- takes like this are going around Twitter, adding to the (generally justified!) hype around AI models. It is indeed a very encouraging result. Specifically, that it took so little compute to train something that achieves competitive results on benchmarks
stream Chain-of-thought reasoning Matt Rickard has a concise overview of chain-of-thought: the design pattern of having an LLM think step by step. To summarize, the four approaches mentioned, from simpler to more nuanced, are: 1. Add "Let's think step-by-step" to the prompt. 2. Produce multiple solutions, have the LLM
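For approach 1, a minimal sketch assuming the pre-1.0 `openai` package and a made-up question:

```python
import openai  # pre-1.0 interface assumed; needs OPENAI_API_KEY in the environment

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

# Approach 1: simply append the magic phrase to elicit step-by-step reasoning.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question + "\n\nLet's think step-by-step."}],
)
print(response["choices"][0]["message"]["content"])
```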
stream Define a vector space with words I've been experimenting with embedding ideas into GPT-space, and using the resulting vectors to visualize them. For example, you could plot different activities based on two axes: how safe they are, and how adrenaline-inducing they are. How does it work? The intuition is the following. In words, you can
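A rough sketch of how such word-defined axes could be built, assuming the pre-1.0 `openai` package; the activities, axis words, and the difference-of-similarities trick are guesses for illustration, not necessarily the exact method described in the post.

```python
import numpy as np
import openai  # pre-1.0 interface assumed; needs OPENAI_API_KEY in the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    vecs = np.array([d["embedding"] for d in resp["data"]])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

activities = ["knitting", "skydiving", "chess", "motorcycle racing"]
# Each axis is defined by a pair of words; the score is the difference in similarity.
axes = {"safety": ("safe", "dangerous"), "adrenaline": ("thrilling", "calm")}

act_vecs = embed(activities)
for name, (positive, negative) in axes.items():
    pos_vec, neg_vec = embed([positive, negative])
    scores = act_vecs @ pos_vec - act_vecs @ neg_vec  # project onto the word-defined axis
    print(name, dict(zip(activities, scores.round(3))))
```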
stream AI product overhang There is a massive AI overhang today. The term is analogous to a snow overhang. When snow slides down a roof, sometimes a section goes over the edge and is seemingly supported by nothing. You know it will eventually fall; the tension is in the air. It's a question
stream Self-driving apps There's a now-famous classification of different levels of self-driving: * Level 0: No automation * Level 1: Driver assistance * Level 2: Partial automation * Level 3: Conditional automation (human fallback always available) * Level 4: High automation (can safely pull over) * Level 5: Full automation The full definition is more detailed but
stream Text is the interface, not the use case Probably 80% of (mainstream, non-ML) Twitter is excited about the potential of GPT and ChatGPT. The other 20% is skeptical. Few are indifferent. But both the optimists and pessimists seem to focus on the obvious set of use cases: text generation and chatbots. That is a failure of imagination. Text
stream Prototyping with GPT API is fun I've built around 10 prototypes with GPT over the past couple weeks and it's surprisingly fun. Why is that? Obviously it's a shiny new toy. Whenever I discover a new tech capability, I get excited. I recall discovering D3.js, Bokeh, Keras... These appeal
stream How do you decompose knowledge work for AI I hacked a bit on an AI experiment: idea generator. In general, working on these AI apps makes you think a lot about the work process and unique value-add of knowledge workers in general, and myself specifically. How predictable is it? Which parts are easy to automate; which are
stream AI shovels don't beat AI apps NFTPort sells tools in web3. I've heard people say that this is a smart approach in a volatile market, because selling shovels in a gold rush is safer than looking for gold yourself. That seems wrong. Sure, during the rush your revenue is more stable, but you still