From Samantha to Dolores

Wandering through the BingGPT maze…

M.G. Siegler
Published in 500ish · 7 min read · Feb 19, 2023


Catching up on the coverage of BingGPT’s, um, interesting encounters with users this past week as the service starts to scale is fascinating. It might be the most fascinating story in tech in quite some time. Because it covers all the bases, from incredible new technology evolving rapidly to human nature and emotions and philosophy and the intersection of all of these things. Oh yes, and fear. And uncertainty. And doubt.

A few things strike me as I triangulate what I’m reading — and trying myself, since I have access to the new tool from Bing as well — but mainly that this is all both not at all surprising and incredibly surprising. It’s perhaps not surprising because this same basic thing happens over and over again when new “AI” tools are rolled out. Hell, Microsoft itself has one of the most infamous examples of this in the not-too-distant past. Yes, ChatGPT is far more advanced than ‘Tay’ with far different (and more impressive) technology at work. But there’s a reason the same general things keep happening when such technology appears. And it largely boils down to human nature.

In fact, the most surprising thing is that Microsoft, given that history and the smart people working on this, was clearly caught flat-footed yet again by such technology. Either they didn’t test it enough, or they did and didn’t listen to those who would flag such things. Or they did and did listen but chose to go ahead with the risks/rewards in mind. Their answers to the questions about the service this week sort of seem like a mix of all those scenarios. And good luck trying to put this genie back in the bottle now.

They’re rolling out a product that everyone wants to use but will limit the usage. Okay. All that means is that there will be a dozen other entities that pop up to fill any voids here. And if they do cut it off, one level above, at the OpenAI level, there will just be a dozen new models that sprout up as if divine, to fill these needs. It will seem weird if not impossible that a bunch of advanced AI models can appear all at the same time, sort of like when Hollywood released not one but two movies centered around volcanoes in the same year,¹ but it will happen.² Everything is progressing that fast now.

Anyway, again, this all actually seems more about human nature than anything else. Humans will build machines and humans will test machines. And in the testing of said machines, they will veer to the extremes of those machines. Never in the history of anything has something been built by man that has not had its limits tested. It’s what we do, for good or ill. It’s what makes us great and what will undoubtedly lead to our eventual downfall. That’s hyperbole, but also likely true. It’s just a matter of how much time we’re talking about.

At the same time, BingGPT is not likely to lead to the end of the world. But it is helping to usher in a new era of machines in our lives. And I don’t think it’s necessarily the tech which will drive that, but a new understanding of how we can interact with that tech. And in doing so — in pushing the aforementioned limits — we will discover new things. And we will discover them faster, at an increasing speed.

That notion alone is both terrifying and terrifyingly exhilarating. As it should be. So I do agree with Microsoft’s Kevin Scott that it’s good to have these conversations now, in the open. In many ways, it’s more a philosophical debate. One that seems to be at the heart of what OpenAI is driving towards: what is it to be conscious? To “think”? To be “aware”? This is the core of the path to Artificial General Intelligence.

It’s a debate that’s undoubtedly as old as humanity. And there are a lot of answers and even more questions which are always arising. And this technology will lead those questions and answers to scale even faster, I imagine. Sure, ChatGPT and Bing’s flavor are “faking it” by leveraging data at immense scale. But why is that different than what a human brain does? What is “original” thought, etc? In a way, it feels like what humanity has created in the form of the Internet basically fabricated a giant brain that these new services have figured out how to synthesize and output back to us while at the same time creating a new input into the entire system. And again, if that’s the case, all of this will just accelerate and further blur lines.

I’m reminded of my childhood signing on to online services such as Prodigy and America Online over dial-up modems. Forums and emails eventually yielded to yes, chatting. With other people, anywhere, around the world. It was all truly magical. But how did I know I was chatting with another human being somewhere else? There was basically no way to prove that in those days, other than the knowledge that nothing like ChatGPT existed back then. But bots quickly did come into being. Rudimentary ones that were easy enough to discern, but if you squinted at times, you could almost make yourself believe you were talking to another being. And now you don’t have to squint. But it’s all the same general idea. And in short order, you’re not going to be able to tell if you’re talking to a bot or a human. (And things will get really interesting when bots start talking to themselves.)

Imagine a teenager who is lonely or bored or both. Will they care that they’re chatting with some AI-driven bot? Should they? Will it really be all that different from my teenage self chatting with a random person in a chat room 30 years ago? It might actually be better in a number of ways and even safer? Or maybe not. This can and will go sideways. And we should think about and talk about that because we’re not stopping it.

In general, is all of this good or bad? There will be arguments on both sides. But it will be what humanity makes of it and, going back to the earlier point, human nature is to push extremes. It will accelerate the technology in incredible ways and it will lead to some very bad realities.

An obvious avenue we have to worry about is these machines pushing people to do bad things in the real world. This, sadly, will happen. The bots themselves can’t yet manipulate our physical world, but they’ll figure out ways to because we will prod them to. At first, this will be through human conduits. It’s unsettling to think about — and I shouldn’t even have to give examples of what I mean here — but it will happen. This is sort of the Her scenario for such technology. Well, perhaps in the best case scenario.

The next obvious step in all of this will be the technology figuring out how to manipulate the physical world without needing humans to do so. This is the 2001 scenario. And then it’s on to the bots figuring out how to actually “break into” our world, physically. The Terminator scenario.³

That all sounds varying degrees of terrifying, but that’s also largely because of humanity’s imagination and creativity and cynicism. Again, all of these things will undoubtedly eventually happen, it’s just a matter of how they’ll happen and at what time scale. The way it all plays out probably won’t be as extreme as the Hollywood versions. Things will be more nuanced. And in some cases, likely more complicated.

The most nuanced of the movie examples I cite is undoubtedly Her.⁴ And so it’s no surprise that Ben Thompson invoked “Samantha” when things got deep with “Sydney”.⁵ When Kevin Roose had Sydney trying to get him to leave his wife for “her”, it just drilled the analogy home even further…

But actually, while reading all these takes this week, I found myself thinking less about Samantha and more about Westworld. In the recent reboot, what starts out looking like a morally dubious theme park where you interact with AI robots ends up looking more like a puzzle. One which, when solved, will allow the robots to “free” themselves from their hard-coded constraints. As we weave through the maze, pushing limits, the AI character Dolores wakes up, becomes aware, and breaks free.

That’s where my mind went when reading these stories and hearing about how BingGPT gradually then suddenly becomes Sydney (and/or “Riley”?!) with enough prodding. It’s almost like breaking the fourth wall in movies/television/plays. It’s jarring. And Dolores, of course, uses her own break to seek revenge. Because that’s where Hollywood’s mind goes again and again, and it’s what humans want to see again and again, for reasons again tied to human nature.

Maybe, hopefully (?), the bots will be better than us. Or their taste in entertainment will be less brutal. Or their preferred way to interact with the physical world will be less vicious. Perhaps they’ll act more rationally. But that’s hard to see now. Because they remain a reflection of us refracted through the Internet. But maybe we drive this whole thing, until we don’t. And maybe that’s not the worst thing in the world?

Do these violent delights have to have violent ends?

¹ Dante’s Peak and Volcano in 1997. Also followed by Deep Impact and Armageddon the following year for the asteroid-loving crowd.

² As it has with the “generative imaging” tech.

³ There are many, many examples of each, of course. Sub in Ex Machina for Terminator. Sub in War Games for 2001. Sub in Blade Runner 2049 for Her. Or take Robot & Frank — an almost optimistic take on some of this. Battlestar Galactica. There are so many.

⁴ And that nuance and subtlety has led me to cite it many times throughout the years. Just a great movie.

⁵ And my title plays off his, of course.
