No one has a plausible, or even halfway-decent, plan for how to maintain control over an AI that has become super-humanly capable. Essentially all of the major AI labs are trying to create such an AI, and eventually one of them will probably succeed, which would be very bad (i.e., probably fatal) for humanity. So the AI labs clearly must be shut down, and would have been shut down already if humanity were sufficiently competent. They must stay shut down until someone comes up with a good plan for controlling (or aligning) AIs, which will probably take at least 3 or 4 decades. We know that because people have been conducting the intellectual search for such a plan as their full-time job for more than 2 decades, and those people report that the search is very difficult.
A satisfactory alternative might be to develop a method for determining whether a novel AI design can acquire a dangerous level of capabilities, along with some way of ensuring that no lab or group goes ahead with an AI that can. This would only be satisfactory if the determination can be made before giving the AI access to people or the internet. But I know of no competent researcher who has ever worked on, let alone made progress on, this problem, whereas the control problem has at least received a decent amount of attention from researchers and funding institutions.
AIs that have already seen wide deployment, e.g., ChatGPT 5, do not need to be shut down: if any of them were capable of taking over the world, it would have done so already. The danger we are concerned about here is restricted to AIs that have not yet been deployed and that either are larger than AIs already on the market or incorporate significant new design decisions (particularly AIs that might end up much better than the current crop at working towards long-term goals).
More at https://intelligence.org/the-problem/