
The internet is changing, so what do we need to know?

9:01 pm on 17 May 2023

Who do you trust for medical advice: a human doctor, or an AI-powered chatbot? Photo: Supplied

Changes in AI mean the internet as we know it is over. The possibilities ahead are nuanced and complex and it is vital that we consider them, futurist and academic Amy Webb says.

It is Webb's job to look at where technology is taking us as head of the Future Today Institute. She is an author and an assistant professor at New York University's Stern School of Business.

Webb told Afternoons we are in the middle of a perfect storm of advances in code, generative AI and hardware that could vastly change many sectors of life as we know it.

And this could go one of two ways: solving some of humanity's biggest challenges, or creating ones we never imagined.

Yesterday, OpenAI chief executive Sam Altman, whose company created ChatGPT, testified to the US Congress that regulation of AI was vital.

Webb argued governments needed to move quickly to guide and regulate AI so that we control where it is going, not the other way around.

Webb says futurists "use data to build out models" with more complexity and variability than statisticians typically consider, extrapolating into the future to give insight into what could happen and to inform good decisions now that help put us on the right path.

"In times of deep uncertainty or soul-crushing change - which describes the moment we're in right now - my phone rings a lot... companies are much more willing to invest in the future when they feel terrified in the present," she says.

"The challenge of course is making that investment in a more ongoing way."

The focus of most discussions about AI at the moment was ChatGPT and OpenAI, which were just one tiny piece of the wider AI "ecosystem". That narrow focus was only a small part of what people should be considering when thinking about the future, she said, and that applied to large and small businesses, the public sector and politicians alike.

"It's not enough any more to brainstorm 'what ifs' about the future, you kind of have to get down and do the work of really figuring out what it's going to look like."

Webb said a sensible starting point was to consider how we were making our choices - "because each choice has reverberating influences".

And that from now on, the internet would be searching us, instead of us searching it.

"When most people talk about AI who don't work in the field, they are generally talking about automated systems - so like algorithms that use data to make decisions. What we are currently fascinated by is chatbots.

"But frankly AI is just the next era of computing, encompassing a whole bunch of different technologies, and you use it every day.

"In fact, we're using it because I'm in New York, you're in New Zealand, and were using really complicated technology to make this connection work and have it sound like we're right next to each other."

It was not practical for most people to grapple with the full complexity of AI, but the discussions still needed to happen, so it would be helpful if simple, well-informed explanations were easily accessible, she believed.

"A lot of our data have been used over the years to help us use technology, just in an easier and more automated way.

"Imagine a future in which rather than you typing in some words into a search and getting endless pages of things you have to scroll, instead, just getting the answer or getting the action accomplished that you want.

"And maybe in that future you're no longer looking at a two-dimensional web page, but instead you're having a conversation, almost like having an assistant in your pocket.

"For that future to happen - which is very much in progress right now - we need to change some of how search works. Which means that you and your data are very much being surveilled as these systems start to evolve."

The possibilities for both good and bad uses of AI were "endless", she said.

"A lot could go wrong," even if we had good intentions.

For example, when the human genome was first sequenced, it took 13 years and cost close to $3 billion.

"Right now it's cheaper to sequence a genome than it is to buy a nice television or even an ipad and that number keeps going down."

So when Covid-19 first emerged, scientists were able to very quickly sequence the genome of the virus that caused the disease, and use computers to help understand the virus' characteristics: "To get us to the point where we had new types of vaccines in the form of messenger-RNA," Webb said.

"That all happened shockingly fast because all of that technology had been in development and a lot of it relied on AI."

The GPT-4 sign on OpenAI's website, displayed on a laptop screen, March 2023. Photo: Jakub Porzycki / NurPhoto via AFP

That was a good use of AI, but conversely there remained the possibility that harmful pathogens could be created using AI, she said.

Or AI could be used to create deepfakes: hackers could access a hospital's network and swap a patient's MRI results - perhaps those of a significant world leader - with a deepfaked scan showing something like a cancer growth that was not there, or hiding one that was, she said.

There were many ways this technology could be used by bad actors, and it was problematic if the large corporations that wielded it did not have the best interests of humanity at heart - which "they certainly do not," she said.

The lack of regulation on AI was both concerning and frustrating to her.

Webb said there was a long history of leaders of significant tech companies asking the US Congress for more regulation, but little happening.

"They've had a long time to get their acts together. In February of 2020, Open AI put out a press release about the second generation of this very powerful technology that we're all talking about today - this would have been GPT2, the press release said this is so powerful and so dangerous we can't release it.

"That was three years ago - what's everybody been doing? Shouldn't you have been talking about guard rails or regulation? Three years is plenty of time to put together a draft."

Another scenario where AI could be beneficial was helping agriculture develop ways to overcome the challenges caused by more extreme weather, keeping production more constant and less vulnerable to shocks.

She did not believe emerging AI uses would wipe out the job market in the way some commentators had described, though she thought many jobs would change significantly.

But the transition would be significant, Webb said, and it was important that people considered the nuances in discussions about AI, rather than getting caught up in headline-grabbing extremes.
