A lot of people think that predictions of when AI will achieve certain milestones are important. Given the rapid rate of AI progress and the lack of unified safety standards, I’m inclined to agree. We need a deadline for AI safety and ethics standards that can be clearly communicated to industry, regulators, and the public. To put it concretely: by the time a harmful AI is developed, it would be best if the generals in charge of our nuclear arsenal weren’t caught off guard.
Good essay! I'd suggest, though, that the question the IPCC tried to answer was better defined than what AI forecasters are trying to predict, which makes the latter harder. That, in turn, makes clarification in the form of some common standards even more necessary.