Here is a look at gold’s fake breakdown and “Super-Intelligent” AI.

Gold’s Fake Breakdown
October 5 (King World News) – 
Graddhy out of Sweden:  At present, Gold has a huge shakeout in the making below the important yellow breakdown line.

Gold’s Huge Shakeout Below $1,673 Reversed

As said, more often than not, the first break out of an important coil is a false breakout. So it was this time, as suspected, since Gold is now back well above the $1,673 yellow level. Now we want to see several weekly closes above that $1,673 level.


“Super-Intelligent” AI
Gerald Celente:  
A “super-intelligent” artificial intelligence (AI) very likely would become so capable that humans would no longer be able to direct or contain its behavior, several researchers have warned in a study published in the Journal of Artificial Intelligence Research.

Odds are that such an AI eventually will be created, the group said.

To control it, scientists would have to create a prototype of it to analyze and test various methods of control—but to create it and turn it on to observe its behavior would almost certainly release it “into the wild” with unknown consequences, the scientists said.

The control problem can’t be solved by creating a “lite” version of the program, because artificial intelligence is already teaching itself things in ways that engineers don’t understand. A lesser version of a super-intelligence could easily, and quickly, evolve out of control. 

Rules such as “don’t harm living things” or “don’t disrupt the economy [power grid, military infrastructure]” would not be guaranteed to work because the AI could evolve to become autonomous and make decisions contrary to those instructions.

Once AI is operating on a level beyond human comprehension, how could we get inside it to tell it what to do? We couldn’t, the scientists argue, because we wouldn’t be able to understand how the AI was making decisions or what guidelines it was setting for itself.

“A superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable,” they wrote.

In part, the team viewed the problem through Alan Turing’s 1936 “halting problem.”

The problem is in knowing whether a computer will reach a solution to a given problem and halt, or keep searching indefinitely for an answer always just beyond its grasp.

Turing proved mathematically that no general procedure can decide, for every possible computer program that could be written, whether that program will halt. This, the paper’s authors say, means there is no guarantee that a super-intelligent AI could be kept under human control.
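Turing’s proof is a diagonalization: given any claimed halting-checker, one can always build a program that does the opposite of whatever the checker predicts about it, so no checker can be right about every program. A minimal sketch in Python (the function names here are illustrative, not from the paper):

```python
def make_adversary(halts):
    """Given any claimed halting-checker, construct a program it gets wrong."""
    def adversary():
        if halts(adversary):
            while True:    # checker predicts "halts" -> loop forever
                pass
        return "halted"    # checker predicts "loops" -> halt immediately
    return adversary

def pessimist(prog):
    """A candidate checker that claims every program runs forever."""
    return False

adv = make_adversary(pessimist)
# The checker predicts adv never halts, yet adv halts at once:
print(pessimist(adv), adv())  # False halted
```

The same construction defeats any candidate checker, not just this trivial one: if the checker says the adversary halts, the adversary loops, and vice versa.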

There already are AI programs that learn on their own and can solve mathematical and logical problems beyond humans’ mental abilities. These problems usually have no practical application, such as computing pi to the millionth decimal place.

However, as engineers keep designing more sophisticated and complex artificial intelligences, it will be impossible to predict exactly when to stop in order to prevent the inadvertent creation of a runaway super-intelligent AI. 


© 2022 by King World News®. All Rights Reserved. This material may not be published, broadcast, rewritten, or redistributed.  However, linking directly to the articles is permitted and encouraged.