This post was originally published in Fast Company.

In 1984’s The Terminator, humankind is imperiled when the Skynet computer network—created by the menacingly named Cyberdyne Systems—becomes “self-aware” and turns against its makers. This July, a Japanese company, also called Cyberdyne, unveiled a brand-new line of artificial intelligence–aided robots. The wheeled automatons are set to be deployed as cleaners and porters in Tokyo’s Haneda Airport. For now, they’re hardly a threat to humanity—unless, of course, you happen to be a cleaner or porter put out of work.

Recent months have been rife with debate over the future of artificial intelligence and the relative dangers and advantages to humans that such a future might hold. Make no mistake: Robots will be coming for more of our jobs in the years ahead.

The fact is we’re now on the cusp of a “Second Machine Age,” one powered not by clanging factory equipment but by automation, artificial intelligence, and robotics. Self-driving cars are expected to be widespread in the coming decade. Already, automated checkout technology has replaced cashiers, and computerized check-in is the norm at airports. Just like the Industrial Revolution more than 200 years ago, the AI and robotics revolution is poised to touch virtually every aspect of our lives—from health and personal relations to government and, of course, the workplace.

Image by US Navy Research under CC BY 2.0

But there’s one important difference this time around. The Industrial Revolution ended up being a net creator of jobs on a massive scale. There’s a real possibility the AI revolution, by contrast, will be a job killer—and on an equally vast scale. This won’t happen all at once, of course. But considering that the pace of change only stands to accelerate, is it too soon to start asking: How do we prepare for a future where jobs themselves may be in short supply?

Sci-fi nightmare or real-life threat?

The idea of robots taking our jobs turns out to be far from a fringe theory. Of 1,896 prominent scientists, analysts, and engineers questioned in a recent Pew survey on the future of jobs, 48% “envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers.”

Among the most vulnerable groups are professional drivers like truckers and taxi drivers. By 2020, GM, Mercedes, Audi, Nissan, BMW, Renault, Tesla, and Google all plan to be selling autonomous vehicles in some form. Uber’s CEO Travis Kalanick has already mentioned plans to one day replace all of the company’s drivers with self-driving cars. Other fields where displacement is imminent (or already happening) include low-skill jobs in customer service, health care, and home maintenance.

Image via Google

It doesn’t end there, though. White-collar roles once thought to be the exclusive domain of human beings could also end up on the chopping block. The first to go, according to the experts Pew surveyed, include paralegals, bookkeepers, transcriptionists, and medical secretaries. The widespread use of DIY tax and finance software and automatic transcription tools like Siri only hints at the changes to come in these sectors. The important thing to note is that these jobs aren’t just repetitive mechanical functions. They require an ability to learn and adapt to new information. And this is precisely why the coming AI revolution is so scary.

I’ve seen how quickly new roles can appear and disappear even in my own sector, social media. Just a few years ago, “social media manager” was one of the most in-demand job functions on the career site Indeed.com. Then social media management tools—including those made by my company, Hootsuite—became more widespread and easy to use. Social media use has increased exponentially since then, but demand for dedicated social media managers hasn’t kept pace. This is still a critical role in large organizations, but for many businesses, ever more sophisticated technology has transformed social media from a discrete job into something that people all across an organization can do.

Clearly, we’re still in the early days, and until AI comes into more widespread everyday use, there will be plenty of room to debate the jobs issue. Won’t the automation of low-tech roles ultimately lead to more high-tech ones? Just as in the past, new jobs—and entirely new sectors—will no doubt emerge. What’s unclear is whether these new positions will offset the loss of the old ones. Take Uber drivers. At the end of last year, Uber had more than 160,000 active drivers. When robots ultimately take the wheel, someone is still going to have to handle coordination, programming, and servicing for Uber. But that workforce will presumably be tiny compared to the one that’s currently employed.

Extend that kind of downsizing across entire industries, and the scale of the problem becomes apparent. A 2013 University of Oxford study concluded that the combined advances in computers, automation, and AI could put up to 47% of U.S. jobs at risk within the next two decades alone. Roles that require an advanced skill set will still be safe. And there will still be jobs at the bottom of the economic ladder, those that require little training and involve non-routine, service-type tasks. That still leaves whole sectors of the economy—in particular, those that employ the middle class—that could be hollowed out.

Cue the latest dystopian Hollywood blockbuster of your choice—The Hunger Games, Snowpiercer, District 9, Elysium, etc. Are those fictive visions anything like what’s actually in store? One thing the latest sci-fi movies are right to note is the Pandora’s box of social ills that opens up when well-paid jobs are scarce—from (yet more) income inequality and social unrest to increasingly repressive governments and the growth of a permanent, marginalized underclass that’s excluded from participating in the economy.

Or not. Right now it seems premature to start evoking scenes like these, and history has no shortage of pessimists whose dire predictions now look pathetically wrong. Writing in 1798, Thomas Malthus famously predicted that since population multiplies “geometrically” while food supply grows “arithmetically,” the human race faced an imminent future of famine and disease. What he failed to take into account, of course, was how new technologies would lead to exponential increases in crop yields and advances in medicine.

So should we panic, or sit back and let the robots do their thing? What seems abundantly clear is that the nature of work is changing. The same jobs that support millions of people today may not be here in 10 or 20 years. The most sensible option would seem to be to start taking steps now to prepare for future job displacement. But how?

Rethinking education in an automated world

The traditional answer has been to invest in developing skills that machines can’t replicate—creativity, problem solving, ingenuity, and other higher-order functions. Interestingly, embracing these skills means taking a step back from the idea of the human being that emerged during the Industrial Revolution—a cog in a machine, interchangeable and reproducible—toward the older humanism of the Renaissance, which saw people as possessed of unique gifts to create and innovate.

The problem is that public education in the U.S. and much of the world is, in many ways, a by-product of the Industrial Revolution. Education came to be standardized just like production, with students lined up in neat rows of desks and taught a uniform curriculum. An emphasis on memorization and rote learning helped produce a uniform citizenry—literate, compliant, interchangeable—to fill standardized roles in industry, offices, and government.

Image by Richard under CC BY 2.0

None of that cuts it in an age when intelligent machines can do anything rote or repetitive far better than we can. Cultivating some of our last uniquely human abilities—namely creativity and social intelligence—requires reimagining education as a means not of reproducing uniformity but of nurturing exceptionalism: the ability to do things that can’t be codified or systematized. The kind of lateral thinking, autonomy, imagination, and creativity prized in alternative education models like Waldorf and Montessori would need to be brought to the forefront. A focus on accepting facts and internalizing codes would need to be replaced by an emphasis on questioning, theorizing, and, well, dreaming.

On one level, this sounds great. Let’s leave the drudgery for the machines. Let’s take back ideas, art, and creativity in a kind of modern techno-utopian Renaissance. But here’s the thing: That might not be enough.

Promoting creativity and encouraging independent thinking might help us stay ahead of job losses in the short term. But in the long term, advanced robots may well be able to execute even some of these uniquely “human” functions better than we can. Here we’re getting into the realm of “strong” or “full” AI—machines that aren’t just able to learn basic tasks but can master pretty much anything. If you’re a futurist, this is when talk of the “singularity” comes into the picture—the moment when computers can make themselves smarter, leading to capabilities that match, and then quickly exceed, our own.

Estimates for when we’ll approach this kind of capacity vary widely. But we’re creeping closer all the time. The day when robots replace high-skill human jobs may well be centuries off. Or it could be, relatively speaking, just around the corner. “The central question of 2025,” insists GigaOM lead researcher Stowe Boyd, “will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?”

To that, I’d add a few corollaries: How do we keep the economy humming when jobs themselves have grown obsolete? How do people support themselves? And what does it mean to be a productive member of society in a post-job world?

Radical solutions to mass unemployment

The scale of this problem may require some radical, even counterintuitive solutions, like giving money away. A growing chorus of tech cognoscenti, from all-star investor Marc Andreessen to Barack Obama’s onetime director of analytics Jim Pugh, has espoused the idea of a “living income.” Not welfare or charity, a living income is a stipend—roughly enough to live on, with few frills—paid to every adult in the country, whether they’re working or not. In the U.S., the numbers thrown around have ranged from $15,000 to $20,000 per adult per year.

Let’s get past the obvious reactions to this idea: that giving away money is crazy, that the whole scheme would permanently warp the economy, and so on. Why might the concept of living income actually make sense? For starters, in a world where AI and robotics have made unemployment the norm, not the exception, people still need to eat. They still need to support families. More important still, they need a reason to remain invested in the idea of society. Leaving the masses displaced by new technology to their own devices—jobless and destitute—is hardly a recipe for a bright future.

Living income also allows us to keep the wheels of the economy and innovation turning. “A fundamental insight of economics is that an entrepreneur will only supply goods or services if there is a demand, and those who demand the good can pay,” writes Center for Internet and Society expert Andrew Rens. In the new millennium, technology has generated enormous wealth for innovators and entrepreneurs. This has fed a virtuous cycle, with returns invested in developing newer and better technologies. (This same cycle, it should be said, has also had the not-so-virtuous effect of concentrating wealth in ever fewer hands.) But the whole process grinds to a halt in the absence of consumers. Progress depends, in no small way, on people buying stuff. And that depends on them having an income.

Interestingly, the living-income concept has its adherents on both sides of the political spectrum. Back in the day, both Martin Luther King, Jr. and Richard Nixon supported variations on the idea. Today, corporate-friendly libertarians—of the Charles Koch variety—see it as a way to replace myriad government handouts with one flat, transparent payout. Progressives, meanwhile, view living income as a means to level the playing field and safeguard basic rights and dignities.

Funding it, of course, could get a bit tricky. One estimate pegs the cost of providing living income in the U.S. at $4.38 trillion, more than the entire $3.5-trillion federal budget. Shifting resources from other social welfare programs could help, as could taxes on income earned in excess of the minimum.
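
To see how a number like that pencils out, here is a minimal back-of-the-envelope sketch. The adult-population figure and the stipend range below are illustrative assumptions on my part, not numbers taken from the estimate itself.

```python
# Rough back-of-the-envelope arithmetic for a U.S. living income.
# The adult count and stipend range are illustrative assumptions,
# not figures from the estimate cited above.

ADULTS = 245_000_000        # approximate number of U.S. adults (assumed)
FEDERAL_BUDGET = 3.5e12     # approximate annual federal budget, in dollars

for stipend in (15_000, 20_000):       # dollars per adult per year
    gross_cost = ADULTS * stipend
    ratio = gross_cost / FEDERAL_BUDGET
    print(f"${stipend:,}/adult -> ~${gross_cost / 1e12:.1f} trillion "
          f"({ratio:.0%} of the federal budget)")

# Prints roughly:
#   $15,000/adult -> ~$3.7 trillion (105% of the federal budget)
#   $20,000/adult -> ~$4.9 trillion (140% of the federal budget)
```

Whatever assumptions you plug in, the gross cost lands in the same neighborhood as the entire federal budget, which is why offsets like consolidating existing welfare programs and clawback taxes figure so prominently in these proposals.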

It won’t be easy by any measure, but living income isn’t completely without precedent. In the 1970s, a five-year basic income program in the Canadian province of Manitoba called Mincome showed promising results. Parents spent more time raising children. Students showed higher test scores and lower dropout rates. Hospital visits, mental illness, car accidents, and domestic abuse cases all declined. And in the end, total working hours only slipped by a few percentage points. In other words, having a basic income didn’t lead to sloth or indolence. It let people spend time on the things that mattered: family, education, health, personal fulfillment.

If the robots do take our jobs one day—but give us back some of those things in return—it might not be such a bad trade after all.