Kurzweil is a world-class inventor, thinker, futurist, and author of The Singularity Is Nearer. He has been a leading developer in artificial intelligence for 61 years.
In early 2023, following an international conference that included dialogue with China, the United States released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of “human control” itself is hazier than it might seem. If humans authorized a future AI system to “stop an incoming nuclear attack,” how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes.
We need to recognize the fact that AI technologies are inherently dual-use. This is true even of systems already deployed. For instance, the very same drone that delivers medication to a hospital that is inaccessible by road during a rainy season could later carry an explosive to that same hospital. Keep in mind that military operations have for more than a decade been using drones so precise that they can send a missile through a particular window that is literally on the other side of the earth from its operators.
We also have to think through whether we would really want our side to observe a ban on lethal autonomous weapons (LAWs) if hostile military forces are not doing so. What if an enemy nation sent an AI-controlled contingent of advanced war machines to threaten your security? Wouldn’t you want your side to have an even more intelligent capability to defeat them and keep you safe? This is the primary reason that the “Campaign to Stop Killer Robots” has failed to gain major traction. As of 2024, all major military powers have declined to endorse the campaign, with the notable exception of China, which did so in 2018 but later clarified that it supported a ban only on use, not development. Even that position is likely driven more by strategic and political considerations than moral ones, as autonomous weapons used by the United States and its allies could disadvantage Beijing militarily.
Further, what will “human” even ultimately mean in the context of control when, starting in the 2030s, we introduce a nonbiological addition to our own decision-making with brain–computer interfaces? That nonbiological component will only grow exponentially, while our biological intelligence will stay the same. And as we get to the late 2030s, our thinking itself will be largely nonbiological. Where will the human decision-making be when our own thoughts largely use nonbiological systems?
Instead of pinning our hopes on the unstable distinction between humans and AI, we should focus on making AI systems safe and aligned with humanity’s well-being. In 2017, I attended the Asilomar Conference on Beneficial AI, a gathering inspired by the successful biotechnology safety guidelines established at the 1975 Asilomar Conference on Recombinant DNA, to discuss how the world could safely use artificial intelligence. The talks produced the Asilomar AI Principles, some of which have already been very influential with AI labs and governments. For example, principle 7 (Failure Transparency: “If an AI system causes harm, it should be possible to ascertain why”) and principle 8 (Judicial Transparency: “Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority”) are closely reflected both in the voluntary commitments made by leading tech giants in July 2023 and in President Biden’s executive order several months later.
Efforts to render AI decisions more comprehensible are important, but the basic problem is that, regardless of any explanation they provide, we simply won’t have the capacity to fully understand most of the decisions made by future superintelligent AI. If, for instance, a Go-playing program far beyond the best human player were able to explain its strategic decisions, not even the world’s best player (without the assistance of a cybernetic enhancement) would entirely grasp them. One promising line of research aimed at reducing risks from opaque AI systems is “eliciting latent knowledge.” This project is trying to develop techniques that can ensure that if we ask an AI a question, it gives us all the relevant information it knows, instead of just telling us what it thinks we want to hear, a risk that will only grow as machine-learning systems become more powerful.
The Asilomar principles also laudably promote noncompetitive dynamics around AI development, notably principle 18 (AI Arms Race: “An arms race in lethal autonomous weapons should be avoided”) and principle 23 (Common Good: “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”). Yet, because superintelligent AI could be a decisive advantage in warfare and bring tremendous economic benefits, military powers will have strong incentives to engage in an arms race for it. Not only does this worsen risks of misuse, but it also increases the chances that safety precautions around AI alignment could be neglected.
It is very difficult to usefully restrict development of any fundamental AI capability, especially since the basic idea behind general intelligence is so broad. Yet there are encouraging signs that major governments are now taking the challenge seriously. Following the international AI Safety Summit held in the UK in 2023, 28 countries signed the Bletchley Declaration, pledging to prioritize safe AI development. And already in 2024, the European Union passed the landmark EU AI Act regulating high-risk systems, and the United Nations adopted a historic resolution “to promote safe, secure and trustworthy artificial intelligence.” Much will depend on how such initiatives are actually implemented. Any early regulation will inevitably make mistakes. The key question is how quickly policymakers can learn and adapt.
One hopeful argument, based on the principle of the free market, is that each step toward superintelligence is subject to market acceptance. In other words, artificial general intelligence will be created by humans to solve real human problems, and there are strong incentives to optimize it for beneficial purposes. Since AI is emerging from a deeply integrated economic infrastructure, it will reflect our values, because in an important sense it will be us. We are already a human-machine civilization. Ultimately, the most important approach we can take to keep AI safe is to protect and improve our human governance and social institutions. The best way to avoid destructive conflict in the future is to continue the advance of our ethical ideals, which has already profoundly reduced violence in recent centuries and decades.
AI is the pivotal technology that will allow us to meet the pressing challenges that confront us, including overcoming disease, poverty, environmental degradation, and all of our human frailties. We have a moral imperative to realize the promise of these new technologies while mitigating the peril. But it won’t be the first time we’ve succeeded in doing so.
When I was growing up, most people around me assumed that nuclear war was almost inevitable. The fact that our species found the wisdom to refrain from using these terrible weapons shines as an example of how we have it in our power to likewise use emerging biotechnology, nanotechnology, and superintelligent AI responsibly. We are not doomed to failure in controlling these perils.
Overall, we should be cautiously optimistic. While AI is creating new technical threats, it will also radically enhance our ability to deal with those threats. As for abuse, since these methods will enhance our intelligence regardless of our values, they can be used for both promise and peril. We should thus work toward a world where the powers of AI are broadly distributed, so that its effects reflect the values of humanity as a whole.
TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.