As AI continues to dominate the conversation in just about every space you can think of, a repeated question has emerged: How do we go about controlling this new technology? According to a paper from the University of Cambridge, the answer may lie in a range of methods, including kill switches and remote lockouts built into the hardware that runs it.
The paper features contributions from several academic institutions, including the University of Cambridge's Leverhulme Centre, the Oxford Internet Institute, and Georgetown University, alongside voices from ChatGPT creator OpenAI. Among its proposals, which include stricter government regulation of the sale of AI processing hardware, is the suggestion that modified AI chips could "remotely attest to a regulator that they are operating legitimately, and cease to operate if not."
This would be achieved by onboard co-processors acting as a safeguard over the hardware: they would check a digital certificate that would need to be periodically renewed, and deactivate or reduce the performance of the hardware if the license were found to be illegitimate or expired.
This would effectively make the hardware used to compute AI tasks accountable, to some degree, for the legitimacy of its usage, and provide a method of "killing" or subduing the process if certain qualifications were found to be lacking.
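The paper doesn't specify an implementation, but the enforcement logic described above can be sketched in a few lines. Everything here is illustrative: the `License` fields, the operating modes, and the `check_and_throttle` function are assumptions for the sake of example, not anything from the paper or a real chip.

```python
# Illustrative sketch of the certificate-check logic a co-processor might
# enforce. All names and modes here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class License:
    holder: str
    expires: datetime
    revoked: bool = False  # set if the regulator withdraws the certificate

def check_and_throttle(lic: License, now: datetime) -> str:
    """Return the operating mode the co-processor would enforce."""
    if lic.revoked:
        return "halt"       # certificate revoked: stop the hardware entirely
    if now > lic.expires:
        return "degraded"   # certificate expired: reduce performance until renewed
    return "full"           # certificate valid: run normally

# Usage: a valid, unexpired license allows full operation.
lic = License("example-datacenter", expires=datetime(2030, 1, 1, tzinfo=timezone.utc))
mode = check_and_throttle(lic, datetime.now(timezone.utc))
```

The key design point is that the decision lives on the hardware itself rather than in software the operator controls, which is what would make the scheme enforceable.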
Later in the paper, the authors also suggest requiring the sign-off of several outside regulators before certain AI training tasks could be performed, noting that "Nuclear weapons use similar mechanisms called permissive action links."
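In practice this is a quorum rule: a training run proceeds only once enough distinct regulators have signed off. A minimal sketch of that check, with hypothetical regulator names and no real cryptographic signatures (a real permissive-action-link-style scheme would verify signed approvals, not just names):

```python
# Hypothetical quorum check: does the set of sign-offs include enough
# distinct recognized regulators to authorize a training run?
def approved(signoffs: set[str], regulators: set[str], quorum: int) -> bool:
    return len(signoffs & regulators) >= quorum

regulators = {"reg-a", "reg-b", "reg-c"}
approved({"reg-a", "reg-b"}, regulators, quorum=2)   # True: two recognized sign-offs
approved({"reg-a", "reg-x"}, regulators, quorum=2)   # False: only one is recognized
```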
At a communications and digital committee hearing late last year, Microsoft and Meta bosses were asked outright whether an unsafe AI model could be recalled; both avoided the question, suggesting that as things stand the answer is no.
A built-in kill switch or remote locking system, agreed upon and regulated by multiple bodies, would be a way of mitigating these potential risks, and would hopefully have those of us concerned by the wave of AI implementations taking our world by storm sleeping better at night.
We all like a fictional story of a machine intelligence gone wrong, but when it comes to the real world, putting some safeguards in play seems like the sensible thing to do. Not this time, Skynet. I prefer you with a bowl of popcorn on the sofa, and that's very much where you should stay.