Podcast Transcript

OpenAI Engineer Doug Li: Yes, We Also Use ChatGPT

[Image: Doug Li in a presence facepile next to the title "Building Collaboration"]

Jackson Gabbard: Tony Stark had J.A.R.V.I.S., who he could just give broad instructions to and J.A.R.V.I.S., in a very polite, posh, British accent will tell him how it’s gonna go, whether or not it will work. Doubts him at times. Says, ‘uh, sir, is that, are you sure that’s what you wanted?’

Do you get your own J.A.R.V.I.S. when you become a software engineer at OpenAI?

Doug Li: No 😂 No, we don’t. We’re nowhere near having our own J.A.R.V.I.S.’s. If we did, that’s certainly something that, you know, we’d like to be able to share with the rest of the world. But also we wanna make sure that when we, when we’re doing this, it’s safe. Um, but no, we don’t have our personal J.A.R.V.I.S.’s. That would be pretty cool.

Jackson Gabbard: I’m, I’m devastated, Doug. I gotta be honest with you, I’m a bit devastated by this news.

Hello world! I’m Jackson Gabbard, the CTO and Co-Founder of cord.com. Today we’re gonna talk AI, large language models, privacy, and the very future of software. With me on this journey is the one and only Doug Li.

Doug Li: Hi. My name is Doug. I’m an engineer at OpenAI in our London office, our brand new London office. Thank you for having me, Jackson.

Jackson Gabbard: It is so great to have you here. So for the listeners at home, Doug has kindly agreed to talk with us about whatever parts of his work he can share. But Doug’s story is far more interesting than just OpenAI. To set the stage, Doug has been a software developer professionally since I think 2009?

Doug Li: Yeah, that’s right. 2009, 2010. Yeah.

Jackson Gabbard: And before that you were actually an artificial intelligence researcher. Like, before it was cool.

Doug Li: I think that’s a bit of a stretch, but I did dabble a little bit when I was in college. I basically just kind of, you know, constantly bugged my professors when I was an undergrad about, ‘Hey, like I’m trying to implement my own neural network, can you help me out here?’

And eventually they got tired of my emails and they finally responded. So, yeah. But I wouldn’t consider myself an AI researcher back then.

Jackson Gabbard: Okay. I mean, that sounds like AI research to me. But you know, you call it as you see it. So, after university, you joined Facebook. Well, I think you did another startup, but then you joined Facebook. So this is, we’re talking 2010.

This is early Facebook, way before it exploded. Way before the IPO. And back then, I believe you worked on like, the privacy model and on Facebook messages?

Doug Li: That’s right. I worked on privacy. I worked on messages. I worked a bit on profile, on a version of profile that’s very, very defunct by now. Um, and yeah, I had a lot of fun during that time.

Jackson Gabbard: It was a pretty magical time. I was also at Facebook during these years. I was working on completely different stuff to that. But those were like, flagship products. Those were, sorry, projects, flagship projects within Facebook, like, what it was then and what it became later. We’re gonna circle back to this ‘cause there’s some important context there.

There was one thing as I was reviewing your background that stuck out to me. Your CV includes the detail that you executed the largest privacy migration in human history. I gotta say, that’s an accolade that few people can boast.

Doug Li: Fair enough. I think that’s how we were thinking about it within the team anyway, when I was still back at Facebook.

But yeah, like basically at that point in time, 2010, 2011, Facebook was really getting torn apart over its privacy. I mean, it still kind of is today, but like it was actually far worse back then. Like, people couldn’t control who they were sharing things with. There were a gazillion buttons everywhere.

And so we really wanted to simplify the model, and that was certainly what we told ourselves in terms of what we’re doing. But also, yeah, it was a pretty complicated change and I’m quite proud of it.

Jackson Gabbard: I would love to circle back to this and talk about it in more detail. Um, but just to round out your story. So you left Facebook, you joined a startup, and then I think we talked about this once before, you and I, but you sort of semi-randomly ended up back at Facebook again in 2015. Um, and then as far as I know, you worked on things, like an ads product, but also on VR.

Doug Li: That’s right. I’ve always been curious about the potential for virtual reality in terms of bringing people together.

I did work on ads for a couple of years. I didn’t particularly enjoy it. I felt like, you know, my heart is kind of in product and actually serving people and making people happy, and I felt a bit disconnected from that while I was working on ads. And so when I went back to working on VR products that was just like, incredible joy to be able to go back to leading edge, zero to one, and building something that hopefully makes people happy.

Jackson Gabbard: I can relate to that. I never worked on ads, but I can relate to that feeling of, when the work is fulfilling, when the work is connected to humanity. In 2022, you left Facebook and you joined this AI research startup called Shift Lab.

And there, I believe you were a full stack engineer, but doing like literally the full stack of what it takes to deliver machine-learning, AI-powered products. I think in your case, it was computer vision work?

Doug Li: Partially, yeah. We had this idea of, hey, like, what if you could have AI just help people generate an entire brand?

There are a lot of individual people trying to run their own small businesses and, you know, branding is a very hard thing. And so we were thinking, cool, can people just tell us what are the products that they’re trying to build so that they can focus on building and perfecting their product and we’ll make a brand for them, make a website, make a storefront. Do all of the boring stuff so that people can focus on just building really good products.

Jackson Gabbard: I feel like there are right now somewhere between 50 and 5,000 tiny startups spinning up using OpenAI libraries, using OpenAI LLM completions and things, to do exactly this kind of work. I, I feel like you were ahead of the curve on that one.

Doug Li: Yeah, it was quite fun, for sure. Yeah.

Jackson Gabbard: As you look across all of the different types of work you’ve done, it almost feels like a perfect stage setting for your current work. So, maybe you can complete the picture for us. You’ve mentioned you’re an engineer at OpenAI. Can you tell us what you do there?

Doug Li: Yeah. So basically, you know, we have this step called post-training, which is where we apply reinforcement learning from human feedback to kind of give ChatGPT its ability to follow instructions, its ability to refuse dangerous prompts.

And its, like, very, very rough abilities of, you know, reasoning or solving problems that it otherwise wouldn’t be able to solve. And I work, actually, on a data platform that allows us to feed this data into post-training, so that we as humans are able to steer these models, as opposed to these models potentially generating just, like, garbage or unhelpful or unsafe output.

Jackson Gabbard: I’m guessing you’re sort of building a platform for other engineers to get data back into the model for the reinforcement learning?

Doug Li: Yeah. Research engineers, you know, when we run post-training, basically we more or less always need to run it against this data that we’re collecting to give ChatGPT its personality and so on. And so, yeah. We need a way of collecting that data. We need a way of storing it, and then we need a way of feeding it so that, you know, it’s available when we actually run the training step.

Jackson Gabbard: So for folks listening who are like me. I’m a bit ‘AI-dumb’. I sort of know what I’ve seen on YouTube videos and read in blog posts about it, but I don’t really know it end to end. What does it mean to train a model? What is pre-training and, and post-training? I’ve no idea.

Doug Li: I think it really depends on the types of models that you’re trying to train.

But I think when we’re talking about, you know, large language models, like what OpenAI has, there are multiple steps to be able to get, you know, a model like this. I think the first step is you just train on lots and lots of data, so that it’s just able to predict the next word in a sentence or a piece of text.

You give it a piece of text and then it just constantly tries to predict the next word. But like, that’s not really an assistant. It’s not really something that can help you. Yeah, sure. You have autocomplete on steroids, but people don’t want autocomplete on steroids. We’ve had autocomplete for years, right?

What people want is something that is able to listen to instructions, follow what people say, and hopefully be helpful while at the same time being safe. And so to be able to do that, we actually go through a bunch of additional steps on top of that. You can check out our blog posts. We’ve shared this stuff about one or two years back already.

Um, but basically there’s a couple of steps. So the first step is, you know, after we have this model that’s just been trained on lots and lots of texts, we actually show example conversations, right? And these example conversations are, you know, they’re generated by, you know, people, people like you and me, that actually show, hey, here’s what a conversation between a human and an AI assistant might look like.

And we show it a bunch of these conversations and we run something called supervised fine-tuning on it. And so here, it’s just basically trying to predict the next piece of text, but instead of predicting random pieces of text scattered all across books and the internet and so on, now it’s trying to predict the next text in a conversation where that conversation is between a human and an AI assistant.
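
A rough illustration of the shift Doug describes, sketched as a toy in Python (not OpenAI’s pipeline): the objective stays “predict the next token”, but the training text becomes a human-written demonstration of an assistant conversation.

```python
# A toy sketch (not OpenAI's pipeline) of the move from pre-training to
# supervised fine-tuning: the objective is still "predict the next token",
# but the text is now a human-written demonstration conversation.

# A contractor writes both sides: the user's question and the reply an
# ideal assistant should give.
conversation = (
    "User: How do I undo my last git commit?\n"
    "Assistant: Run `git revert HEAD` to create a new commit that undoes "
    "the previous one without rewriting history."
)

# Every prefix of the conversation becomes a training example whose target
# is simply the token that comes next.
tokens = conversation.split()  # stand-in for a real tokenizer
examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for prefix, target in examples[:3]:
    print(f"context={' '.join(prefix)!r} -> predict {target!r}")
```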

Jackson Gabbard: It sounds like you’re saying there are simulated conversations between an AI assistant and a human, but in the simulation, it’s actually a human pretending to be an AI assistant saying what they think an AI assistant would say. So that there is a sort of a, a record of how an AI assistant is supposed to behave.

Is that how it works?

Doug Li: Pretty much. So it goes through supervised fine-tuning with parts of these conversations. And then afterwards, you know, that in and of itself isn’t sufficient to have a very, very highly performing agent. It also goes through two other steps. So one step is, after it’s gone through supervised fine-tuning, we actually bring in people. So this is kind of where the human feedback comes in. People actually go in and rate these responses, right?

So if I were to ask, you know, an AI a simple question like, ‘What is one plus one’, right? It might generate a bunch of answers, and, you know, some of those answers might be longer, some of those might be shorter.

Some of those could be super direct. And then we have people rate those, right? So we’ll get a bunch of answers, and then people give a rating on every single answer. Is answer A better than answer B, is answer B better than C, and so on. And once we have these comparisons, we can then train what we call the reward model on it.

And the reward model is: now that we have a rough ranking of what is a good answer and what’s a bad answer for various conversations, we train a model to figure out if that model itself can decide if an answer is good or not.
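
The kind of loss commonly used for this step, for example in OpenAI’s published InstructGPT work, is a pairwise comparison loss: the reward model should score the human-preferred answer above the rejected one. A minimal PyTorch sketch, with placeholder scores:

```python
# Sketch of the pairwise comparison loss commonly used to train reward
# models from human rankings (as described in OpenAI's InstructGPT paper).
# The scores below are placeholders standing in for reward-model outputs.
import torch
import torch.nn.functional as F

def reward_ranking_loss(score_preferred: torch.Tensor,
                        score_rejected: torch.Tensor) -> torch.Tensor:
    # Raters said answer A beats answer B, so the loss is small only when
    # the model assigns A a higher scalar score than B.
    return -F.logsigmoid(score_preferred - score_rejected).mean()

score_a = torch.tensor([1.3])  # answer humans ranked higher
score_b = torch.tensor([0.2])  # answer humans ranked lower
print(reward_ranking_loss(score_a, score_b))  # low loss: ranking respected
print(reward_ranking_loss(score_b, score_a))  # higher loss: ranking violated
```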

Jackson Gabbard: Absolutely mind-blowing. Let me see if I can repeat this back and have correctly synthesized the input you’ve given me.

You carefully establish manual, human-run ranking, and then you say, okay, great. Let me take a new model and see if that model can produce the human rankings.

Doug Li: That’s right. And so this is where we try to align the model against human feedback. And so we now have this model called the reward model.

And so when you run an output against the reward model, you get some sort of score, right? And then the final step of training is what we call PPO, which is one of the algorithms that OpenAI developed a couple of years ago. Um, but we run a reinforcement learning algorithm where the model is trained against the reward model: the model is just generating completions, and the reward model will score them.

And over time, what we want is the model to be generating completions and responses that the reward model will score highly because that reward model, to some extent, captures what we as humans want.
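
A toy sketch of that loop, with stand-in policy and reward-model objects; real PPO training also adds clipping, a value function, and a KL penalty against the fine-tuned model, none of which is shown here:

```python
# A toy version of the loop Doug describes: the policy generates, the
# reward model scores, and the policy is nudged toward high-scoring text.
# These classes are illustrative stand-ins, not OpenAI code.
import random

class ToyPolicy:
    def generate(self, prompt: str) -> str:
        return random.choice(["2", "The answer is 2. Happy to help further!"])

    def update(self, completion: str, reward: float) -> None:
        # In real RLHF this would be a gradient step; here we just log it.
        print(f"reinforce {completion!r} with reward {reward:+.1f}")

class ToyRewardModel:
    def score(self, prompt: str, completion: str) -> float:
        # Stands in for the learned model that captures human preferences.
        return 1.0 if "answer is 2" in completion else -0.5

policy, rm = ToyPolicy(), ToyRewardModel()
prompt = "What is one plus one?"
completion = policy.generate(prompt)
policy.update(completion, rm.score(prompt, completion))
```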

Jackson Gabbard: You mentioned the PPO algorithm. What is that?

Doug Li: It’s called Proximal Policy Optimization. Um, it falls into a class of algorithms. I’m not an ML engineer, and so, like, take everything with a massive grain of salt here. I’m a standard software engineer. Um, but I think it falls into the class of algorithms called Actor-Critic algorithms, where, um, basically the model constantly tries to figure out, hey, what is the best action to take in a certain state, as well as predict what the value, the expected future reward, of those actions is going to be in that state.

It gets pretty deep into kind of RL territory, so I’d be happy to cover it if you want to, but at the same time, I dunno how happy your listeners might be in terms of jumping into, like, pretty deep PPO land.
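
For listeners who do want a taste of PPO land, here is the generic clipped surrogate objective from the PPO paper (Schulman et al., 2017), sketched in PyTorch; this is textbook PPO, not OpenAI’s internal training code.

```python
# The clipped surrogate objective at the core of PPO (Schulman et al.,
# 2017), sketched in PyTorch.
import torch

def ppo_clip_loss(new_logprob: torch.Tensor,
                  old_logprob: torch.Tensor,
                  advantage: torch.Tensor,
                  eps: float = 0.2) -> torch.Tensor:
    # Probability ratio between the updated policy and the policy that
    # generated the data.
    ratio = torch.exp(new_logprob - old_logprob)
    # Clipping keeps the update "proximal": the new policy cannot move too
    # far from the old one in a single step, which stabilizes training.
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps)
    return -torch.min(ratio * advantage, clipped * advantage).mean()

# Two sampled actions: one with positive advantage (better than expected,
# so reinforce it) and one with negative advantage (discourage it).
loss = ppo_clip_loss(new_logprob=torch.tensor([-0.9, -2.0]),
                     old_logprob=torch.tensor([-1.0, -1.8]),
                     advantage=torch.tensor([0.5, -0.3]))
print(loss)
```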

Jackson Gabbard: This is a great point. And I don’t know. I’ll tell you what, we’ve got a lot to cover. Let’s go breadth wise first. And if there’s a particular topic that shines, we can, we can circle back to it. And also, after we release this, if folks swoop in with comments saying, ‘Oh my god, I can’t believe you left off on the PPO topic! That was the thing we wanted to hear!’ Maybe we can invite you back. We’ll buy you a nice dinner and you can come back and sit with us again for another interview.

Um, I do have more topics for us to dive into. So, you know, you’ve described the nature of training a model, the sort of stages of it. I wanna bring it back to really, really concrete stuff, like you mentioned, being the person building the platform for the post-training, if I heard you correctly.

Doug Li: I basically have built a platform where we gather data from humans, about how we would like the model to behave, and then we feed that into post-training, yes.

Jackson Gabbard: You know, a very basic, silly question, but it’s so engineer-y and interesting. If you can talk about it, like what was the last thing you committed to the codebase?

Doug Li: It was a performance improvement. Like, just like some of our endpoints were just running very, very, very slowly and internally, people were complaining about, ‘Hey, I’m trying to use this endpoint, but it’s slow’.

And so I spent some time looking into the problem and eventually it ended up being like, a three line change, plus like 80 lines of additional unit tests stacked on top of it. But really, that change took me like more or less half a day to a day just to figure out, cool, this is the exact source of the problem.

And it turned out to be a tiny change, but yeah, hopefully very impactful.

Jackson Gabbard: Was this some sort of like, misconfiguration or some accidental n^2 algorithm? Or, something like that?

Doug Li: No, no. We were using an underlying library and like the underlying library was doing some stuff with JSON serialization that we like, didn’t expect.

But I honestly did not expect it to be slow because, you know, typically, JSON serialization is a pretty solved problem. But it turns out that when I literally, like, don’t use the default JSON serializer and replace it with, you know, my own JSON serializer, it’s just much faster. It’s like 60% faster.

Jackson Gabbard: Well, you said it was a three line change.

Doug Li: Yes.

Jackson Gabbard: You can’t write a JSON serializer in three lines 😯

Doug Li: That’s right. I literally just replaced whatever it was doing with pre-built JSON library stuff that was just widely, straight up available in Python. And it just like, was so much faster.
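
A sketch of the kind of three-line swap Doug describes; the transcript doesn’t name the library involved, so the framework hook below is hypothetical, and orjson is just one widely used faster serializer:

```python
# A sketch of the kind of three-line change Doug describes. The framework
# and its `json_serializer` hook are hypothetical (the transcript doesn't
# name the library); orjson is one widely used faster serializer that
# skips validation work a default encoder may do.
import orjson  # pip install orjson

def fast_dumps(obj) -> str:
    # orjson serializes to bytes with far less per-call overhead than the
    # stdlib's json.dumps; decode to keep the str interface callers expect.
    return orjson.dumps(obj).decode("utf-8")

# app = SomeFramework(json_serializer=fast_dumps)  # hypothetical hook
print(fast_dumps({"endpoint": "/data", "rows": list(range(5))}))
```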

Jackson Gabbard: So I guess this is like the dangers of using open source software. You don’t know sometimes if it’s gonna be performant or not.

Doug Li: Well, you know, I wouldn’t call it the dangers, but a lot of the time, you’re not actually going to run into these issues, right? Most of the time, open source software is extremely, extremely good. Like, one of the benefits of being open source is you get tons and tons of feedback when people actually use your thing, and people run into all sorts of weird edge cases. And, you know, this piece of software that we’re using, my theory is it’s doing a lot more work because other people have complained about these edge cases not working for them, right?

And so it’s probably doing a bunch of validation beforehand. It’s doing a bunch of unnecessary checks because, look, you know, in our code path, we don’t need to do a bunch of these things. And so simply overriding that helps. But with open source software, you know, in the initial state, it’s something that just works by default, and, you know, honestly, I’m pretty happy with it. But yes, there are times where you need to dig a bit deeper, when it really matters for performance or whatever, to fix things.

Jackson Gabbard: There’s a burning question I have for you. You’re one of few people in the world who has the opportunity to earnestly speak about this.

You were at Facebook pretty early, and now you’re at OpenAI. I’m curious, what to you is the difference in working at one of these companies versus the other? Like, as an employee, as an engineer, as a, you know, just a member of the company culture. What feels different?

Doug Li: You know, early stage Facebook, um, and where OpenAI is currently feel quite similar in my experience. So when I worked at Facebook in 2010, we had roughly, I think, 1,200, 1,300 people or so. Um, about a third of those people were engineers. I think OpenAI probably skews slightly more towards research; it’s pretty research-heavy.

But, you know, we’re looking at hundreds of employees still. And in both cases, both of these organizations were going through periods of extreme growth where, hey, the entire world was trying to use these companies’ products. Um, and I think that’s the same case with where OpenAI is today.

Um, it’s extremely fun. I would say like you get into these small teams and there’s so much really, really important stuff to do. And there’s never enough people to do all of these things. But also like, people work really, really hard ‘cause people care. Like I remember early on at Facebook, we would have these hackathons occasionally, and like, I think these days, hackathons are really structured.

You show up to an event and, you know, there’s catering and all of these things going on, and, like, probably even a company sponsor or whatever, depending on where your hackathon is. But back then it was literally just, cool! People put out a bunch of drinks, buy some pizzas, and then, you know, you’re free to stay in the office and work 24 hours throughout the night.

And a lot of people did that, right? Um, and I think there’s a little bit of that culture in OpenAI, where everybody’s just trying to get a lot of stuff done. People gravitate to solutions that work, that work very, very quickly. Um, and ultimately it just solves the problem without a lot of the cruft. Eventually, once you become larger, planning gets a lot harder, and coordination between thousands and thousands or tens of thousands of people, like in Meta’s case, gets very, very hard.

And so you’re spending a lot more effort trying to figure out, cool, how do we not step on top of each other’s toes and like, have the entire company just work itself in the same direction. Like I think that’s what large companies, very large companies are about. But like where OpenAI is right now, like there’s very, very little of that and everybody’s just very focused on the work.

Which is the same as like, my early experience with Meta.

Jackson Gabbard: In today’s culture I feel like hackathons have something like a dirty, a dirty-word status. ‘Cause it’s like, ‘You want me to work 24 hours for free!? Like what? I want work-life balance!’. But I think that perspective that I hear sometimes sort of misses the sense of camaraderie, the sense of motivation, of, like, positive intrinsic reward from achieving a lot of important work very, very quickly.

As you described that I’m, I’m remembering some of those early days and, and the, yeah, we worked 24 hours and it was frigging awesome! And I would do it again in a heartbeat if I wasn’t old and tired!

Doug Li: Yeah. I think age does come into it a bit. Like, I cannot just be pulling 24-hour hackathons all the time. It just, it just doesn’t work. I have a family now. I care about, you know, the people that I love and I wanna spend time with them, too.

But at the same time, I think camaraderie is the key word here. Like, when you find the right team and you really enjoy what you’re building and what you’re working on, then having that sort of freedom to spend time with your team, with other people who care about exactly the same problem as you do and being able to solve that problem. Or say, look, let’s just see what we can do in a day, right?

There’s a lot of, I think, fun and energy and excitement that goes into that.

Jackson Gabbard: 100%. Let’s talk a little bit more about OpenAI and what it’s like to be an engineer there.

What’s something that’s different about how you develop software at OpenAI versus how, you know, the non-AI powered companies do it?

Doug Li: I think model training is one, obviously. You know, you need to train these models, and that’s part of the development process, right? Like, when you are fundamentally trying to work with AI models, a lot of the time your output isn’t just code, right?

The model itself, the weights, making sure that it’s performing well. Um, all the different training runs and training infrastructure. There’s a lot of that to make sure that everybody, like people, researchers in the company, can do that quickly and get results quickly.

There’s also a lot of science to it, which I’m not, again, I’m not an ML engineer or scientist, and so it’s pretty hard for me to go into that detail.

But I’d say that you do need to consider that. Like, normally you would just deploy code, and now we are deploying models plus code. And so, sure, there’s a little bit more involvement in terms of understanding what you are deploying and so on. But yeah, I don’t know if that answers the question.

Jackson Gabbard: I guess where I’m going with this is maybe a little bit more the sci-fi end of it. You know, on the extreme end. Tony Stark had J.A.R.V.I.S., who he could just give broad instructions to, and J.A.R.V.I.S., in a very polite, posh, British accent, will tell him how it’s gonna go, whether or not it will work.

Doubts him at times. Says, ‘Uh, sir, is that, are you sure that’s what you wanted?’. Uh, do you get your own J.A.R.V.I.S. when you become a software engineer at OpenAI?

Doug Li: No. No, we don’t.

I think we’re nowhere near having our own J.A.R.V.I.S.’s. And I think if we, if we did, that’s certainly something that, you know, we’d like to be able to share with the rest of the world. But also we wanna make sure that when we, when we’re doing this, it’s safe.

Um, but no. We don’t have our personal J.A.R.V.I.S.’s. That would be pretty cool.

Jackson Gabbard: I’m, I’m devastated, Doug. I gotta be honest with you, I’m a bit devastated by this news.

Doug Li: Um, yeah. We, we do try to use AI where possible. Um, like I, I personally, I’ve used GitHub Copilot very, very heavily. It just saves me a ton of time. Like, if there’s one thing that, you know, people can potentially take away from this podcast, it’s just: use AI tools. Make AI be a part of your life. Understand where it’s good and where it’s bad. And for me, yeah, Copilot isn’t perfect. There are many cases where it just, like, hallucinates, or just generates stuff that I don’t really want. But there are a lot of cases where it’s also extremely good and it saves me a ton of time.

I think especially with writing tests, I found like it’s just very, very good at that. Um, and so yeah, AI tools save me time and we certainly have that at OpenAI obviously.

Jackson Gabbard: I’m actually really surprised that you folks don’t have your own proprietary internal code interpreter-driven Copilot variant.

Doug Li: My developer tool of choice is Visual Studio. Like, previously, I used to be super hardcore Emacs. In fact, I still switch over to Emacs whenever there’s something that, like, Visual Studio just struggles with, or if I just know that I can get it done faster in Emacs, but most of the time I’m using Visual Studio.

And like, when you’re thinking about AI products and integrating AI, like, it’s not just about, hey, cool, how good is the model? But also what is the product experience like for people? And losing Visual Studio in and of itself would pretty much like be a non-starter for me if I were to use an external/different model which obviously I can neither confirm nor deny.

But like the amazing thing about Visual Studio and GitHub Copilot is just the integration, which I honestly find is just very, very easy to use. And so yeah, you want your tools to almost be an extension of your arms, your hands, the way you think.

And a lot of work, I think, goes into that. Like, a lot of people are very, very focused on the ‘cool, AI can do magical things’ part, without really thinking about how this actually integrates into what they’re building in a very deep way.

Jackson Gabbard: What you’ve said now, actually, it connects with something we spoke about the last time we caught up.

I was asking you about ways you’ve incorporated AI into your workflow and like, things you’ve built for yourself potentially that would surprise other engineers… that other engineers might find interesting or useful. Would you like to talk a little bit about how you’ve used AI in your engineering workflow?

Doug Li: I actually wrote a tiny little shell script where I can just call out directly to GPT from my terminal prompt. Because, like, one of the things I often forget is, when I’m trying to compress something, like create a tarball, what are the four flags that I always have to pass in to be able to compress or decompress it. And, like, I just don’t wanna go and search all the time just to repeat that same flow and come up with the same answer.

And so in that case, I just, like, use GPT in my terminal. Uh, so I say, ‘Hey, ChatGPT, like, how can I do this?’ Right? ‘I have this file, how can I compress it?’ Or whatever. And then it returns me this shell command to run. And if I hit enter, it just runs that command right in my shell. And so, like, that’s something random that I just hacked up myself. Which, by the way, I used ChatGPT, like GPT-4, to build that tool in and of itself. And so, like, I’m just using AI to build the AI tool that I want. Um, but yeah, I guess that’s another example.
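
A minimal sketch of such a tool; Doug’s was a shell script, whereas this version is Python, and the model name and prompt wording are illustrative:

```python
#!/usr/bin/env python3
# A minimal sketch of a "GPT in my terminal" helper, in the spirit of
# Doug's tool. Requires `pip install openai` and OPENAI_API_KEY set.
import subprocess
import sys

from openai import OpenAI

client = OpenAI()
question = " ".join(sys.argv[1:])  # e.g. "compress the logs dir into a tarball"

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Reply with a single shell command and nothing else."},
        {"role": "user", "content": question},
    ],
)
command = resp.choices[0].message.content.strip()

# Show the suggested command; run it only if the user confirms with Enter.
if input(f"$ {command}\nPress Enter to run, Ctrl-C to abort ") == "":
    subprocess.run(command, shell=True)
```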

Jackson Gabbard: I think this is amazing. You remind me of that somewhat well-known xkcd comic where the cartoon person has to save people from, you know, being blown up. I think that’s the premise. And they say you can save them, you just have to uncompress this tarball. And the final panel of the comic is the engineer saying, ‘I’m so sorry’.

You’ve AI’d your way out of that.

Doug Li: Pretty much!

Jackson Gabbard: So we mentioned before that we would cut back to something from your CV about the largest privacy migration in human history. I love this term. If we could cut back to that now, can you set the stage for us? What does it mean?

What was happening at that point in your career?

Doug Li: Yeah, so at that time, I think it was 2011. Or late 2010. Facebook had about 50 to, I think, 70 privacy settings. Every setting governed something different. You had a setting for photos, you had a setting for posts. You had a setting for who could see your friend list or people that you friended.

Um, you had a setting for your birthday, and all types of information had their own individual setting, buried beneath three to four different pages of UI settings. And they were all over the place. Uh, and that doesn’t even go into the advertising targeting part of, ‘Hey, like, you know, is Facebook allowed to use my data to, you know, serve ads and so on?’

So there were just, like, so many settings. And I remember this article by, probably, the Wall Street Journal, uh, or the Washington Post, where they literally just wrote out all of the different settings that we had and how to use them. And I was shocked. Like, it was just so bad.

Fundamentally, as a product, nobody wants to remember all of this stuff. There needs to be an easier way to have control over, cool, this is my information, this is what I should be sharing. And so we had a PM come in. I think it was Sam Lessin.

So Sam Lessin, he basically said, look, you know, instead of doing all of these settings, we really should have just inline controls for everything. So next to your birthday or next to your, you know, especially when you’re creating a post, every post should have its own privacy setting.

Uh, and that became what we call the privacy selector today, which is: when you’re creating a post or attaching photos to it or whatever, there’s this little dropdown that says I can share it with everyone, I can share it with friends, or I can share it with only me.

Uh, and yes, there was a bunch of extra stuff. You could define your custom friend lists or whatever, or like super close friends and stuff. But largely the idea is, when you’re creating the content, show the privacy control next to the content. And if you have bits of info spread out across your profile, show that privacy icon next to that piece of info.

Very, very simple concept. But like before that, all of this stuff was just buried between, you know, like tons and tons of various settings, pages, and so on. And so lots and lots of backend changes were actually necessary to be able to make this change because previously, stuff didn’t really have object-based privacy.

And so there was this big move to, okay, every single thing, like your, your post, your photo, whatever, everything by itself is privacy-aware. Um, and so we basically made tons of these changes both in the backend and the frontend, and then we shipped it.

And I think one of the things is, after we shipped it, there wasn’t a lot of... like, I think Facebook constantly just gets backlash about privacy, but like, I was pretty happy that when we shipped it, hey, we actually didn’t get any backlash at all.

People found it much easier to use and stuff just made sense. So instead of 70 different settings buried behind pages, now you have probably like, one or two settings pages that had like three or four settings each for like very, very top-level, obvious basic stuff. And then everything else was just inline.
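
For readers unfamiliar with the term, “object-based privacy” can be sketched like this; an illustrative toy, not Facebook’s implementation: the audience rule lives on each piece of content, and every read path asks the object itself who may see it.

```python
# An illustrative toy (not Facebook's code) of "object-based privacy":
# each piece of content carries its own audience rule, and every read
# path asks the object who may see it.
from dataclasses import dataclass
from enum import Enum

class Audience(Enum):
    EVERYONE = "everyone"
    FRIENDS = "friends"
    ONLY_ME = "only_me"

@dataclass
class Post:
    author: str
    text: str
    audience: Audience  # the inline privacy selector writes this field

def can_view(viewer: str, post: Post, friends_of: dict) -> bool:
    if post.audience is Audience.EVERYONE:
        return True
    if post.audience is Audience.FRIENDS:
        return viewer == post.author or viewer in friends_of.get(post.author, set())
    return viewer == post.author  # ONLY_ME

friends_of = {"doug": {"jackson"}}
post = Post("doug", "Hello from London!", Audience.FRIENDS)
print(can_view("jackson", post, friends_of))   # True
print(can_view("stranger", post, friends_of))  # False
```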

Jackson Gabbard: Now that you point it out, I have a hard time even remembering what life was like before you had inline privacy controls. It sort of seems crazy to not have inline privacy controls.

Doug Li: That’s right. And honestly, I don’t think that concept really existed beforehand. I think it was when Facebook pretty much came onto the world stage and everybody started using Facebook and started posting everything online, that’s when this need actually emerged of, oh, okay, cool. I need to be aware of my own privacy now. I need to be aware of what it is that I’m posting and who the people are who are seeing it.

Because before, before Facebook, at that time, the internet was pretty much just this sort of, you know, bad, dangerous place where you had to be constantly careful of everything. Never share your real identity.

Um, and that’s very clearly changed today, right? People very readily share who they are online and so on. And so I think that definitely played a role. That privacy migration definitely played a role of people getting more comfortable with the concept of when you share a thing, you get to control who gets to see it.

Jackson Gabbard: Wow. So you talk about privacy and you talk about performance. You talk about sort of the utility of AIs. I guess I, I’d love to hear you describe how you make an AI useful.

You used the term ‘performant agent’, I think was the phrase you used, but, but like, what does that mean to, to the average muggle out there?

Doug Li: And this is purely my own personal experience with products. It isn’t particularly anything related to like AI, the technology itself. But fundamentally, like when you’re building a good product, you want people to very easily be able to express their intent. Um, and then you wanna fulfill that intent in a very helpful, and also very safe way.

I think a lot of things go into that, but ultimately it’s about focusing on the basics. You know, you can put tons and tons of UI controls or, like, fancy effects or whatever into your product. But ultimately it is about going back to the basics of what it is that you’re trying to serve.

And, you know, I was reading this, but basically when some of the folks on the ChatGPT team were designing our mobile application, it was all about simplicity. It was about taking away all of the unnecessary stuff, because ultimately, it’s just a conversation. A conversation between the user, the person who’s actually trying to work, or get something achieved, or ask a question, and the AI assistant. And that’s it.

That’s all you need. You don’t need anything else aside from that. If anything like, focus your effort into just making that experience very, very good and take away everything else. So that could mean, hey, if latency is very bad, go and improve it. It’s not necessarily something people see, but you can’t have something that’s slow.

If the UI is confusing, see if there’s something you can take away or rearrange so that, you know, people naturally fall into the right patterns. Um, and as I said, none of this is really AI-related. It’s just how you build a good product.

Jackson Gabbard: I love that you say this. It also connects very, very intimately with what we do here at Cord.

I guess I’d be curious to hear your take on this. How important do you feel it is for a product that has an AI assistant? For instance, I think lots of companies in the world are now thinking very seriously about how to make an AI assistant a core piece of their product experience. How important is it that they have things like presence and typing indicators and, uh, sort of cues? I would call them social cues, but it’s a little bit weird to call them social where the other half of the interaction is a bot. But, social cues that help the user understand that they are connected to a thing that will be responsive to them?

Doug Li: I think it’s very important. Not knowing if something is going to respond back to you is obviously not a good experience. Like, I think certainly before ChatGPT, you know, you’d go on these websites, and a lot of these websites would have this little bubble at the bottom left or bottom right that would pop up and say, ‘Hey, like, I’m a bot. I’m here to help you with your questions’ and whatever.

And, like, every single time I’ve used one of those things, I type a question in and either get no response, or the response actually generates some sort of email and then it says, cool, like, ‘We will respond to you in a couple of hours’. But it looks like a chatbot, right? That’s not a good experience.

Um, or like, the response in and of itself fundamentally wasn’t helpful. Like, it was very robotic, canned responses about obvious things where, you know, just doing a quick Google search or whatever would’ve saved me more time, and I’d have gotten an answer faster than using this thing that these companies actually invested time and energy and effort in.

And so, if you’re trying to integrate AI in kind of a chatbot/assistant way into your product, right? Yes. Obviously, make it very clear that the thing that you’re talking to is an AI assistant. If it’s gonna respond to you right away, show that, hey, like, it’s busy, you know, generating completions. And actually, if you can, just stream the responses. Because if you’re streaming it word by word, people are gonna have a way better experience than if you are just throwing the entire completion out there.

There are cases where you don’t wanna do that, right? Maybe you wanna moderate that response or you wanna, you know, check it or whatever, or add extra rendering to it. But sure, if you can, and if you think for your specific use case, it’s safe to do so, then having that extra bit of, you know, quick response to see that, hey, okay, cool, the AI is now saying something, I can actually see the words getting streamed back to the UI. Um, that in and of itself can save you seconds of latency and response time, depending on how long that completion is. Um, and so yeah, responsiveness is super, super important.
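
Concretely, with the OpenAI Python library, streaming looks roughly like this (model name illustrative); each chunk is rendered the moment it arrives:

```python
# A minimal streaming sketch with the OpenAI Python library. Each chunk is
# printed the moment it arrives, so the reader starts parsing the answer
# while the rest of the completion is still being generated.
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "How do I make a tarball?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```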

Jackson Gabbard: What I like about what you’re saying is it, it takes advantage of two things simultaneously.

One thing is it literally reduces the amount of time you’re waiting for some API response to happen because you’re getting partial responses in real time. Uh, and that’s just a faster response time. Literally, like wall time to go there and get it back, it’s faster. But the other thing this solves for is human wait time.

If you’re a human and you have hit enter and sent a message, and now you’re just waiting and you don’t know if it will be 300 milliseconds or three seconds, or three hours, it’s a rage quit moment. You will just stop using this thing.

But if you have a streaming response, if you’ve got character by character updates from the AI completion API streaming back into the UI, the user for one knows something is happening because they’re literally watching it.

And for two, you get to use up their time with them being entertained by seeing this thing complete the response. Like if you were to take that same amount of time and not show the completion, and just jump from nothing to the fully complete text, that’s actually a much, much more negative experience than watching character by character come in because you’re like, oh look, it’s doing the thing. Like I’m literally watching it unfold before me.

Doug Li: Yeah. It’s a better experience, but also, two, when you get into the science of how people read, people’s eyes move in these, you know, little jumps called saccades. And the thing is, most of the time when we stream responses back, especially if you’re using, like, GPT-3.5, the responses are incredibly fast.

Like, people cannot read at that rate. And so, basically, as soon as you’re starting to see words on the screen, that is already useful to the user. ‘Cause the user is already starting to parse it. They’re already starting to read what that response is and understand, hey, does this help solve my problem or not?

Whereas if you kind of wait all the way until everything’s done, then that person, you know, would be waiting probably for seconds, especially if you’re using, you know, a slow model or getting a very large response, right? And only then do they actually start to read it. And so, as a user, you’ve just spent time waiting for essentially nothing to show up when you could have already started reading the response.

Jackson Gabbard: I love it. So I think we’re at a good point to wrap up or to begin to wrap up. Before we say our goodbyes here, Doug, is there anything you want to just add, anything you wanna promote, anything you you wish the world knew about AI? Any message from the heart you want to deliver to the world?

Doug Li: Well, we’re here in London. If, you know, if you are interested in coming to work at OpenAI, definitely take a look at our careers page and see if there are potential roles for you.

We’re looking for extremely, extremely talented people from all sorts of fields. And so, yeah…

Jackson Gabbard: I think you folks are based in King’s Cross, is that right?

Doug Li: Yes.

Jackson Gabbard: That’s exciting. If you wanna work with the sort of engineer who flip-flops in and out of Facebook at his leisure, runs an AI startup, and flips over to OpenAI, Doug Li could be your coworker.

Doug, I, I just wanna say thanks again for taking the time to come and chat with us. Um, it’s been a long time coming. You folks don’t know, but we’ve been working very hard to coordinate this, and today is the day: we are here together recording this podcast. I’m super excited to have Doug here.

Um, yeah, I just wanna say thank you again. Super, super grateful for your time.

Doug Li: Cool. Yeah. Happy to be here. Thanks for having me.