The U.S. government, and all other governments, should regulate the development of SMI. In an ideal world, regulation would slow down the bad guys and speed up the good guys; what happens with the first SMI to be developed seems like it will matter a great deal. I think regulation is clearly a good thing when the survival of humanity is in question.
Altman wants regulation to include a system for measuring the benefit of using or training machine intelligence, as well as external review of its capabilities:
For example, beyond a certain checkpoint, we could require development [to] happen only on airgapped computers, require that self-improving software require human intervention to move forward on each iteration, require that certain parts of the software be subject to third-party code reviews, etc.