What Artificial General Intelligence Could Mean For Our Future
Each week, tech companies trumpet yet another advance in artificial intelligence, from better chat services to image and video generators that spend less time in the uncanny valley. But the holy grail for AI companies is known as AGI, or artificial general intelligence—a technology that can meet or outperform human capabilities on any number of tasks, not just chat or images.
The roadmap and schedule for getting to AGI depend on who you talk to and their precise definition of AGI. Some say it's just around the corner, while other experts say it's still years down the road. In fact, it's not entirely clear whether current approaches to AI will be the ones that yield a true artificial general intelligence.
Hosts Ira Flatow and Flora Lichtman talk with Will Douglas Heaven, who reports on AI for MIT Technology Review; and Dr. Rumman Chowdhury, who specializes in ethical, explainable and transparent AI, about the path to AGI and its potential impacts on society.
Will Douglas Heaven is the senior editor for AI at MIT Technology Review. He’s based in London, England.
Dr. Rumman Chowdhury is founder and CEO of Parity Consulting, a Responsible AI Fellow at Harvard University's Berkman Klein Center, and a visiting researcher at the NYU Tandon School of Engineering in New York, New York.
IRA FLATOW: Every week, it feels like there's another AI advance. Some company produces a system that will write proposals better than you can, or makes more lifelike pictures or videos, or wrangles data in a new way. And most of these systems are still limited to a few specialized tricks in what they can do.
But how close are companies to creating something that can virtually think on its own or outperform humans on any task– what researchers are calling AGI, artificial general intelligence? And that's what we are talking about this hour. And we want to hear from you.
FLORA LICHTMAN: Yes, how do you feel about this AGI advance? Is it something you’re looking forward to, or are you dreading it? What are your hopes or your fears for how AGI might impact your life? Call us. Our number is 877-925-9174. That’s 877-925-9174.
IRA FLATOW: Let’s get into this. Let me introduce our guests. Will Douglas Heaven is a senior editor for AI coverage at MIT Technology Review. He’s based in the UK. Welcome back.
WILL HEAVEN: Hi, it’s good to be back.
IRA FLATOW: Nice to have you. And Dr. Rumman Chowdhury, founder and CEO of Parity Consulting and the responsible AI fellow at the Berkman Klein Center at Harvard. You know where that is– in Cambridge, Massachusetts. She’s also a visiting researcher at the NYU Tandon School of Engineering, and she’s with us in our New York studios. Welcome back.
RUMMAN CHOWDHURY: Thank you.
IRA FLATOW: Nice to have you. Everybody’s heard of AGI. Will, what do we mean when we say AGI? What does that mean? How is it defined?
WILL HEAVEN: I have no idea.
[CHUCKLING]
And that’s–
IRA FLATOW: Thank you, folks.
RUMMAN CHOWDHURY: And we’re done.
WILL HEAVEN: The end. No, seriously– I mean, that is a fascinating question. My whole problem with AGI, the term, is it means so many different things to different people, and it's sort of changed its meaning over the last few years. But for the sake of getting the conversation going, what it seems to mean to people now– the companies putting out their blog posts and their manifestos about what they're building– is an AI system that can do a wide range of tasks, cognitive tasks, as well as a human can.
That's about as good a definition as you're going to find. But, I mean, my problem is that there are so many words in there that themselves need defining. Like, what is a cognitive task? What does it mean to do it as well as a human? How many cognitive tasks do we need before we call an AI system an AGI? Yeah, we'll get on to this, I'm sure. But that's what we're talking about.
IRA FLATOW: Rumman?
RUMMAN CHOWDHURY: So that's the question of the hour, of the day, of the year. And it's interesting because it's on purpose. The vagueness is left there so that we fill in the narrative ourselves and scare ourselves. But actually, if you look at how OpenAI has defined AGI, it's the automation of tasks of economic value.
And this is what happens when corporations get to define what intelligence means. They pin it to things that are economically productive. And I think that is a very important distinction from simply saying cognitive tasks. And Will’s right. Yesterday, DeepMind had a blog post where they pretty much defined it as the automation of most human cognitive tasks. And I agree with Will. Who knows what that means?
IRA FLATOW: Does that mean self-awareness?
RUMMAN CHOWDHURY: It absolutely does not mean self-awareness. Intelligence and sentience are two totally different things– completely different things.
WILL HEAVEN: If you think AGI is a muddy question, then sentience– we'll be here for ages and not get anywhere.
IRA FLATOW: So, so far I've mentioned things like chatbots– questions that get answered– and things that make images or video clips. But how do you make the leap from something that's good at doing these sorts of things, Rumman, to something that's good at doing all sorts of things?
RUMMAN CHOWDHURY: How do you make the technical leap?
IRA FLATOW: Yeah, what is that leap? How is that done? Is it learning, teaching computers to do different things? Is it sucking all the energy up like we hear these computers do?
RUMMAN CHOWDHURY: I mean, if we're going to pin this to defining AGI, I think the goal would be that it's able to do these tasks without us explicitly teaching the model to do so. What's captured the imagination with generative artificial intelligence is that it seems as if we're just handing over a pile of random-looking information, and these models are putting together patterns.
And that is actually an impressive feat. But whether it is alive, or replacing humans, et cetera– what these things are in the real world is very, very different from raw capability performance. So one of the interesting things to think about is, when these new models come out– and like you said, they seem to come out like three a week– and the companies say they're performing better than x, y, and z, the important thing to ask is, what is the measurement by which we're saying it's so impressive? And that's publicly out there.
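That pattern-finding happens without labels or explicit instruction. As a minimal sketch of the idea– a toy bigram model, nothing like the scale of the systems discussed here– consider:

```python
# Toy illustration of "hand over a pile of text and let the model
# find patterns": a bigram model learns, with no labels and no
# explicit teaching, which word tends to follow which.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# The "pattern" the model discovered on its own: after "the",
# "cat" is the most likely next word in this tiny corpus.
print(follows["the"].most_common(1))  # [('cat', 2)]
```

Large language models do the same thing in spirit– predict what comes next– just with vastly more data and parameters.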
FLORA LICHTMAN: I want to bring AI into this conversation.
RUMMAN CHOWDHURY: Like, literally an AI?
FLORA LICHTMAN: Yeah, literally an AI. I asked Google’s AI assistant, Gemini, what we should ask you two. And we workshopped it a few times, but here’s where we got. And let’s see what you guys have to say about this.
GEMINI: If I could only ask one question to AI experts about AGI and humanity's preparation, it would be, considering the inherent uncertainties surrounding the development and capabilities of AGI, what is the single most proactive and universally beneficial step that humanity should take now to prepare for its potential arrival, regardless of the specific form AGI might take? This question aims to–
FLORA LICHTMAN: It goes on, and on, and on.
[CHUCKLING]
What do you think?
RUMMAN CHOWDHURY: I’m going to punt that one to Will.
[CHUCKLING]
FLORA LICHTMAN: What is the most proactive and universally beneficial step humanity should take to prepare for AGI’s arrival?
WILL HEAVEN: I mean– I don't think AGI is coming anytime soon. And I'm not really sure what it would be when it came. So just a little side note there– I think at some point, probably soon, because so many companies have said they're building it and it is around the corner, someone will just make a definition and say, we're calling this thing we've just made AGI.
So if the question that Gemini is asking is, what do we need to do to prepare for that, then it kind of depends what that is. But more constructively, I would like us to get off this obsession with AGI and focus on the specific technical advances that are coming along really fast. And to be clear– dismissing the idea that AGI is around the corner is not to dismiss how amazing the advances in video generation and chatbots have been over the last few years.
I’m constantly wowed. And it’s wonderful doing my job, like seeing the latest thing that’s come out and talking to the people that are making it. I’m constantly awed by how good this tech has got. And I’d like to just sit with the capabilities that we have and think about what impacts those are going to have on the world. And there’s enough to deal with just with the AI we have today without spending so many hours and words about preparing for AGI.
IRA FLATOW: Well, let me go to Samuel in Rochester, New York, who may have some words like that. Samuel, welcome to Science Friday.
SAMUEL: Hi, can you hear me?
IRA FLATOW: Yes, go right ahead.
SAMUEL: Hi, I wanted to just say that I agree with what's been said– we have a lot of very good image generators and chatbots. But those are pretty far away from something that can reason cognitively and generate new ideas. It's always kind of– we're making amalgamations of things that are already on the internet. And the jump from that– from summarizing to generating something new, something that hasn't been done or said before– that's a leap that I think hasn't been made yet. And the trend I see is that tech companies can slap "AI-powered" on anything now, and it makes investors happy. But the results, the profitability, the advancements– it's hard to know what the scale of that impact will actually be.
IRA FLATOW: Good point. Rumman?
RUMMAN CHOWDHURY: Yeah, there was a report last year, I believe, or two years ago, that pretty much dug into all the companies claiming that their products were AI-powered. It was in the UK. And it found about 60% of them had no AI under the hood. First of all, we have a very slippery slope definition of AI itself. And then now it’s translated into AGI.
And again, to Will's point, the analogy I give is how we have gone down this same slippery slope with self-driving cars. Remember the earliest self-driving cars, and what we imagined– that we'd get into this pod, take a nap, and it would whisk us off to wherever we were going?
IRA FLATOW: Yeah, The Jetsons.
RUMMAN CHOWDHURY: Right. But now, according to Elon Musk, we have self-driving cars, which we still have to sit there in traffic with our hands on 10 and 2 and our foot on the brake. And this car is, quote, unquote, “driving.” But if it got into an accident, we are liable. So you’re still effectively absorbing all the stress of driving with none of the self-driving.
IRA FLATOW: But let's also go right to the main point as I see it, which is that the reason all this AI exists and AGI is being developed– it's about the money.
RUMMAN CHOWDHURY: 100%.
IRA FLATOW: Isn’t it?
RUMMAN CHOWDHURY: It is. And actually, OpenAI and Microsoft have defined what AGI is with a monetary value. They have said it is when they have earned $200 billion of revenue. Then, they will slap on a sticker and say, we have AGI.
IRA FLATOW: So do you agree, Will? It’s about the money here?
WILL HEAVEN: Yeah, I do. And I'm glad we were reminded of that definition. I think that's probably the best definition of AGI we have. At least it's precise and clear. But yeah, absolutely. It's so hard to talk about AI advances and really get into the details of what these systems can and can't do, because the tech is being developed by companies.
They’re doing it for profit. Obviously, they’re going to make as big a claim for their new tech as they can. And again, genuinely, when you get a lot of these demos in your hands, they are truly impressive. But they’re never going to be as impressive as the companies selling them want them to be.
RUMMAN CHOWDHURY: And the story within the story is that, for many years, companies have poached the brightest scientists and minds from academic institutions. In fact, they poached them straight out of their PhD programs. If you go visit the University of Cambridge, Oxford, MIT, Stanford, there’s a very close tie to every single major model developer, and that’s on purpose. So there is something also to be said here about the lack of independent researchers who are able to do this work without getting funding or just explicitly being hired by these companies.
IRA FLATOW: We're talking about AI this hour on Science Friday, and we'd like to get your calls. We're going to have to go to a break, but don't forget our number. 877-925-9174. Talking with Dr. Rumman Chowdhury and also with Will Douglas Heaven. And we'll be right back after this short break. Stay with us.
This is Science Friday. I’m Ira Flatow with Flora Lichtman. We’re talking this hour about artificial general intelligence, systems as smart or maybe smarter than any person in any task, which is not here yet, but could be soon. And we want to hear from you. Our number is 877-925-9174. 877-925-9174. And let’s go to the phones to Chris in Scottsdale, Arizona. Hi, Chris.
CHRIS: Hey, Ira. Can you hear me?
IRA FLATOW: I sure can. Go ahead.
CHRIS: Excellent. Well, I was just going to mention that I use AI quite a bit for nutrition analysis. So it helps me come up with plans of what I’m going to eat during the day. And I love it. One question about that that I would have– and it helps me with recipes too. But do they think the memory on these things is going to get better or we’ll have personalized AI that can remember what we ate a month ago? Because what I find is, I have ChatGPT, and I’ve tried Grok, and both of them forget. If you go back to it after a week, now you’re having a brand new conversation.
So one thing would be about the memory. And I had just a second question about what will happen with AI. Do you think it's– or do your experts think it's more likely that we'll have a situation where they'll replace the jobs lost to AI with universal basic income– something like that? Or do you think it would be something like an assisted situation, where all of our jobs are assisted? We tell AI what to do, and it does the job for us? So those are my two questions.
IRA FLATOW: Two meaty questions, Chris. Thanks for calling. I’m going to divide it up. Will, you want to take the first half of that?
WILL HEAVEN: Sure. Yeah, memory is a feature that a bunch of the companies making these chatbots have either already added or are talking about adding. I think it's an option that you can turn off or on in ChatGPT, and probably in the others, like Gemini and Grok. So I don't know if the caller– Chris, wasn't it– is using that feature.
I'm not an expert on the different tiers– there are paid tiers and free versions of these chatbots. But it's certainly something that exists in some of them. And where it doesn't already, I know that's what people are aiming to improve. This idea that it will be your personal little buddy that knows more about you than anyone else and can recommend stuff– that's the vision.
IRA FLATOW: And Rumman, what about jobs? Is it taking our jobs?
RUMMAN CHOWDHURY: Yeah, I also can chime in on the first one. I use Perplexity to help me do research. They actually have something called threads. And then a thread can be a particular topic, and you can go back to that. So not to promote any particular AI. It just happens to be the one that I use for that reason.
Future of work– I have many thoughts on future of work. Well, first of all, I want to start by saying there is no finite amount of work we do as humans. I think one of the fallacies of this “there will be no jobs” conversation is there’s a core assumption that is wrong– that there is a finite amount of work that we do. Any sort of technological advancement has actually not given us less work, but more work.
How much more available are we now that we can be found on these little devices, our phones, 24/7? We used to leave work at 5:00. Very few of us remember that time anymore. So email, internet did not give us less work. It actually gave us more things to do.
And there's some empirical evidence to back this. There are three studies I like to talk about. The very first one came out last year. It's by a labor economist out of MIT, Dr. Daron Acemoglu– a brilliant labor economist. And he did a macroeconomic measure of the impact of AI over the next 10 years and found that, of total factor productivity– so, all of the stuff we produce in the world– less than 1% will be automated by AI.
But that's not nothing. I mean, sub-1% of what the entire world produces is still something. And what he talks about that's kind of interesting– I think this is what captures the imagination– is that most automation tends to get rid of blue-collar jobs or rote tasks. So email automated the sending of mail. But what is interesting, what captures our minds about AI, is that it automates knowledge tasks, which is something we've never had before.
So he talks about how the distribution between blue- and white-collar jobs is actually fairly even– maybe even leaning a bit more toward lower-tier knowledge jobs. The second paper I'd like to talk about is called "GPTs are GPTs." It came out in 2023. It was by some researchers at OpenAI, as well as some economists, asking what may be automated and what threat is faced by different sectors. So it's going from big picture to industry level.
And the rough takeaway is that 80% of jobs will see about 20% automated away, and 20% of jobs will see about 80% automated away. And they were talking about jobs like paralegal, et cetera– research-type jobs, knowledge jobs– which is interesting. The third one just came out last week. Really interesting, and this is getting super nitty-gritty about the future of work.
Harvard Business School and some other folks worked with Procter & Gamble. And they did a study across over 900 employees. It was kind of like a competition: individual humans, individual humans plus AI, teams, and teams plus AI. And they looked at things like quality of work, time to completion, how well it augmented people who already had a skill set, and how well it augmented people without a skill set in a particular topic.
There are lots of details, but pretty much, the takeaway is that human plus AI is better than human alone, which is better than AI alone. So it's one of those things where it is a productivity booster. And what that means is probably what it has always meant for us when we've gotten new productivity technology, which is that we will just have more stuff to do.
FLORA LICHTMAN: When we think about AI, we talk about it as a reflection of us, that it learns from us, it learns from our data. Can we teach AI to be better than us?
RUMMAN CHOWDHURY: Oh, that’s a good question. I think AI is capable of evaluating data at a scale that is hard for humans to do. That’s why the output of these models can be so impressive. So the short answer is yes. The longer, more complicated answer is what do you mean by better?
FLORA LICHTMAN: And I mean it specifically, like doesn’t cheat, is more ethical. When people think about these sort of doomsday scenarios with AI, they’re like, oh, AI is going to scheme and take down humanity. Can you teach AI ethics?
RUMMAN CHOWDHURY: The short answer is yes. And actually, in a lot of these scenarios where AI, quote, unquote, "cheats," it has no normative judgment. It doesn't understand good and bad. Even predating GenAI, I remember some of the earliest models coming out of DeepMind and some of the research bodies would play video games. And they would do things like race a car backwards, or shoot everybody else in the game and then pick up all the goodies.
But that is not the AI being evil. We have decided that is evil because we made rules. And we implicitly know, if I’m playing a game with other people, what I should not do is get rid of everybody else so I can slowly pick up all the goodies. The AI is simply optimizing for what you have told it to do, like in this very blunt way.
If you are of a particular age and I am of a particular age, and you read like Amelia Bedelia as a kid, think of it as Amelia Bedelia. You literally are like– I don’t know– make me a cake. And it will just quite literally–
FLORA LICHTMAN: Very literal. Yeah.
RUMMAN CHOWDHURY: –yes. And a lot of these issues of AI gone awry, it actually can be boiled down to a misspecified objective function. You are telling it to do something. You actually have to think through all the ways in which you are making assumptions, because you have been socialized to do things a certain way. And like, how would Amelia Bedelia understand this?
FLORA LICHTMAN: That’s going to be the new way that I interact with GPT.
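To make the misspecified objective concrete, here is a minimal sketch– a hypothetical toy racing game, not any real system mentioned on the show– where the designer wants the agent to finish the race, but the reward that actually got written down only counts checkpoints:

```python
# Hypothetical toy example of a misspecified objective function.
# The designer wants the agent to finish a race, but the reward
# actually specified is "points per checkpoint crossed" -- so a
# policy that circles one checkpoint forever out-scores finishing.

CHECKPOINT_REWARD = 10
FINISH_BONUS = 50
EPISODE_STEPS = 100

def score(policy: str) -> int:
    """Total reward earned by a policy over one episode."""
    reward = 0
    for step in range(EPISODE_STEPS):
        if policy == "finish_race":
            if step < 5:                 # crosses 5 checkpoints once...
                reward += CHECKPOINT_REWARD
            else:                        # ...then finishes and stops
                reward += FINISH_BONUS
                break
        else:                            # "loop_checkpoint"
            reward += CHECKPOINT_REWARD  # re-crosses the same checkpoint
    return reward

for policy in ("finish_race", "loop_checkpoint"):
    print(policy, score(policy))
# finish_race 100
# loop_checkpoint 1000
```

The loop policy scores ten times higher, so any optimizer will prefer it– not out of malice, but because checkpoints, not finishing, were what the objective rewarded.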
IRA FLATOW: Let’s go to the phones to Anton in Phoenix. Hi, Anton. Welcome to Science Friday.
ANTON: Hey, there. Thank you. Can you hear me OK?
IRA FLATOW: Sure.
ANTON: Yeah, so I just wanted to address the earlier question that you guys asked Gemini, which is if there’s one thing that you would want to focus humanity on, what would it be? And I’m thinking about– like, somebody said doomsday scenario. And oftentimes, when we talk about doomsday scenarios, we’re thinking about the technology getting smarter than us, and then deciding that we’re expendable, and all of that.
But I think that’s kind of misguided. It makes for a good science fiction novel, but I think the problem– if that was the problem, then it would be a technology problem. That would be easy. The problem I see is a people problem. So the NRA says guns don’t kill people, people kill people. And I think we really need to focus on maybe two things.
One is, who's controlling the AI, both in terms of training it as well as using it at inference to actually do things? But I think the bigger thing is really, if you think about artificial general intelligence or artificial superintelligence, where it has– whatever– godlike intelligence, an AI is not going to necessarily see humans as a threat unless humans are competing for the same resources as the AI. So that could be jobs. It could be electricity. It could be any number of things.
And I think it’s– the question that I think about is, how do we arrive at a place where AI isn’t being manipulated by humans for human ends?
IRA FLATOW: Yeah, OK.
ANTON: And just so one example. OK, go ahead. Yep.
IRA FLATOW: Yeah, that’s a good question. Let me get an answer to it because– I mean, if the point of AI is to make money, it’s going to be manipulated to make money.
RUMMAN CHOWDHURY: I mean, it already is. If we think about where money is being spent to build AI capabilities, companies have conveniently found the intersection of things people are willing to spend money on and things that are also important to us. It's not surprising that health care has been one of the primary applications. There's so much money to be made in health care.
But also, we want to lead better lives. The other one people talk about quite a bit is education. But no one is talking about things that are maybe less profitable but also good for humanity. And I appreciate the statement about thinking about the access, or the people behind the wheel. A lot of these doomsday scenarios are very fantastical. What if AI sets off nuclear weapons? Why the hell did you give AI access to be able to set off– (CHUCKLING) you can just not do that.
IRA FLATOW: For those people who worry about the singularity– I mean, when AI is smarter than us and takes over– we become subservient.
RUMMAN CHOWDHURY: I mean, I think most AI is smarter than me from an ability to answer Jeopardy questions perspective. I probably couldn’t beat the average AI system.
WILL HEAVEN: Can I jump in on that?
FLORA LICHTMAN: Yes, please.
WILL HEAVEN: What gets me about all these doomsday scenarios is this weird sense of inevitability that this technology is just going to appear and squash us puny humans. We don’t have to make this. We don’t have to make it have the nuclear codes, as you said. We don’t have to make it so that it has any power over us at all.
IRA FLATOW: But we also already have some advances in medicine, where doctors are doing things they couldn't do before. Aren't there already positive results, Will, of using AI?
WILL HEAVEN: Oh, yeah– many. And medicine is a great example. I mean, just the everyday conveniences that we're already seeing from chatbots, I think, are great. In these conversations where we go straight to AGI, I don't want to come across as sort of a naysaying crank, which is not a good professional look for someone who is very much, and has been for more than a decade, a champion of this technology, which I think is amazing. I think a lot of the interesting, brilliant things that we could talk about get derailed when we talk about doomsday scenarios–
FLORA LICHTMAN: Well, what are they? What are the interesting things that we should be talking about?
RUMMAN CHOWDHURY: I can chime in on some of that. I mean, we are likely to cure many cancers in our lifetime because of the advanced protein folding AI-driven technologies that have been created. This is a fact. We have advances in genomics and medicine because of the models that have been made there.
We have better weather prediction models. And I live part-time in Texas, where hurricanes are a very big deal. We have better weather prediction models that can tell us weeks in advance that a hurricane may be coming, because of AI. And the thing is, this just won't capture the imagination the way a, quote, unquote, "talking humanoid bot" idea will. But all of that is AI. As Will is saying, it's a disservice that so much of the focus, driven by multiple narratives– companies included– pushes us to look at AGI when we could actually celebrate a lot of the great stuff that AI is being used for today.
IRA FLATOW: Let’s go to the phones to Marlena in Washington State. Hi, there. Welcome to Science Friday.
MARLENA: Hi, can you hear me?
IRA FLATOW: Yes, I can.
MARLENA: OK. My question is, what is AI going to do to stop sucking up all the electricity in our environment? I live in a small rural town. And a small rural town really close to ours has made a deal, and they’ve built a big data center. And we all know this is how AI generates all its juice.
And it made a deal with this town waving the employment flag. And now, these residents in this small town are experiencing rolling blackouts. I call this predatory behavior. And I would like to know what these billionaire owners of AI are going to do to be protective of people and save more of the energy in our environment. I mean, come on. This is global warming, people.
IRA FLATOW: Sounds like Marlena’s mad as hell and not going to take it anymore. Will, what do you say to that? Good point?
WILL HEAVEN: Yeah, I think that’s a really good point, especially if you had this affecting your neighborhood. I think we are going to see that. These massive data centers get set up, and they suck the power out of the local grid. So there’s lots of things that could be done.
And let’s let that hang in the air for a minute. There’s a lot of work being done to reduce the size of models. And a smaller model can do many of the things that a larger model can do for less power. There are things that could be done around the way that these models are trained– train them more efficiently.
Rather than just throw every single bit of data you can scrape up at them, maybe curate that data and show them data that’s actually going to be more useful. So the training steps could be fewer, again, using less electricity. That’s all on the side of actually building the models. The data centers, of course, are used then to run the models.
We’re all invoking ChatGPT for our recipes and everything else. And every time we do that, it’s sucking up a lot of power. So I mean, we could be making more efficient chips. We could be running on renewable sources of energy and finding ways to store that energy in the data centers with batteries, et cetera. So all of which is just to say there are solutions available.
Whether they will happen is a completely different question, because right now, this is a race to the bottom. All these companies, having invested everything they have into this race, really need to come out on top with the punchiest, most powerful AI model. And I think the sustainability needs are going to be an afterthought.
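One concrete version of the data-curation idea Heaven describes– sketched here with an entirely hypothetical quality heuristic; real pipelines use trained classifiers, deduplication, and much more– is to filter the scraped corpus before training, so the model sees fewer, better examples and needs fewer training steps:

```python
# Minimal sketch of curating training data instead of using every
# scraped document. The quality heuristic below is hypothetical --
# real curation uses trained quality classifiers, dedup, etc.

def quality_score(text: str) -> float:
    """Crude heuristic: favor longer, mostly-alphabetic text."""
    if not text:
        return 0.0
    alpha = sum(ch.isalpha() or ch.isspace() for ch in text) / len(text)
    length = min(len(text.split()) / 10, 1.0)
    return alpha * length

corpus = [
    "The mitochondria is a membrane-bound organelle found in cells.",
    "click here!!! $$$ win now $$$",
    "Rainfall totals for April were 20 percent above the seasonal norm.",
]

# Keep only documents above a quality threshold; training on the
# smaller curated set means fewer steps and less electricity.
curated = [doc for doc in corpus if quality_score(doc) > 0.5]
print(f"kept {len(curated)} of {len(corpus)} documents")  # kept 2 of 3
```

The point is not the particular heuristic but the trade: a bit of up-front filtering in exchange for fewer training steps, which is one of the power-saving levers mentioned above.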
IRA FLATOW: 30 seconds to go, Rumman.
RUMMAN CHOWDHURY: Well, Microsoft is rebooting Three Mile Island for those who are local, who know what that is.
IRA FLATOW: I remember it well.
RUMMAN CHOWDHURY: Yes. And when pressed on this, Sam Altman was sort of hand-waving– we should have fusion technology in our lifetimes, and everything will be fine. So it seems like they, too, are banking on scientific advancements to do the work for them.
IRA FLATOW: We’ve run out of time. I’d like to thank my guests– Will Douglas Heaven, senior editor for AI coverage at MIT Technology Review, and Dr. Rumman Chowdhury, founder and CEO of Parity Consulting and the responsible AI fellow at the Berkman Klein Center at Harvard. Thank you both for taking time to be with us today.