The AGI prediction community needs an IPCC report
Many people consider predictions of when AI will reach certain milestones important. Given the rapid pace of AI progress and the lack of unified safety standards, I'm inclined to agree. We need a deadline for AI safety and ethics standards that can be clearly communicated to industry, regulators, and the public. Put another way: by the time a harmful AI is developed, it would be best if the generals in charge of our nuclear arsenal weren't caught off guard.