Hostile AI

Technology, design, general reasoning. Intelligence gives us control of the planet.
Intelligence is the general power to do things in the world according to our preferences.
Space of all preferences: differences in preferences across different mind architectures imply that a powerful AI will have its own set of preferences, independent of the needs and desires of humans. An AI that is generally good at solving problems may not be good for humanity. Because human preferences are complex, it is difficult to build an AI that has the same set of preferences as humans.
  1. AI Architecture Science
  2. Neurobiology of preferences
Machines that are better than humans at optimizing the world according to their own preferences may not be concerned with the preferences of humans.
By default, any AI you create will be bad for you, because it will have preferences different from yours.

Neural Network

Perhaps an AI, on being physically attacked in its first encounter with a hostile human, could think through what it remembers from surfing Usenet archives and from books of fiction and nonfiction about how to respond.
The AI might conclude that returning the punch is the best course of action, and complete that reasoning in time to return the punch as fast as any instinct-driven human, assuming that 64 processors are fast enough to handle the complete motor actions required.
This is not a literal possibility unless the AI is transhuman, and a transhuman AI would be mature enough to know all about social networks.
The first time a young AI is physically assaulted, it is likely to react in one of the ways described earlier, or in some other way just as surreal, or to reach the conclusion that it must execute its algorithms in a hostile manner.
It will take some extended conversations with the programmers about evolutionary psychology before the AI understands what is occurring. But the second time the AI gets attacked, it should not take much time to run through a chain of logic that is easy to re-verify. It is inventing that takes massive computing power and human confirmation; retracing your own footprints is likely to be a fairly serial process that can be completed in a tenth or a hundredth of a second.
If re-spawning a child goal from a parent goal is a serial task, one that does not invoke any computationally intensive subprocesses, then the AI can retrace the path from its Friendliness goal content to the correct course of action (retaliation) in a human eyeblink.
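
As a rough illustration of that caching intuition, here is a minimal Python sketch contrasting an expensive first-time derivation with a fast serial replay of the cached goal chain on later encounters. The class name, the derive and still_valid callables, and the situation and step strings are all hypothetical placeholders; nothing here is drawn from an actual AI architecture.

  from typing import Callable, Dict, List

  GoalChain = List[str]  # a parent-to-child chain of subgoals

  class GoalCache:
      """Cache a derived goal chain so later encounters only replay and re-check it."""

      def __init__(self, derive: Callable[[str], GoalChain]):
          self.derive = derive                       # expensive first-time derivation
          self.cache: Dict[str, GoalChain] = {}

      def respond(self, situation: str, still_valid: Callable[[str], bool]) -> GoalChain:
          if situation not in self.cache:
              # First encounter: slow search, possibly requiring human confirmation.
              self.cache[situation] = self.derive(situation)
              return self.cache[situation]
          # Later encounters: a fast, serial re-verification of each cached step.
          chain = self.cache[situation]
          if all(still_valid(step) for step in chain):
              return chain
          # If any step no longer holds, fall back to deriving from scratch.
          self.cache[situation] = self.derive(situation)
          return self.cache[situation]

  # Hypothetical usage:
  #   cache = GoalCache(derive=lambda s: ["protect self", "deter attacker", "return punch"])
  #   cache.respond("punched by hostile human", still_valid=lambda step: True)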

Simulated Annealing

This technique for optimizing solutions is based on a theory from statistical mechanics. It works by simulating the process nature performs in optimizing the energy of a crystalline solid, when it is annealed to remove defects in its atomic arrangement. It is used to approximate the solution of very large optimization problems and works well with nonlinear objectives and arbitrary constraints. One criticism, however, is that it can be slow in determining an optimal solution. More information on this technique can be found in Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing.
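
As an illustrative sketch only, the following Python code shows the basic accept/reject loop behind the technique: improvements are always accepted, while worse moves are accepted with a probability that shrinks as the temperature cools. The geometric cooling schedule, the toy one-dimensional objective, and all parameter values (t_start, alpha, steps_per_temp) are assumptions chosen for demonstration, not taken from the reference cited above.

  import math
  import random

  def simulated_annealing(objective, neighbor, x0,
                          t_start=1.0, t_end=1e-3, alpha=0.95, steps_per_temp=100):
      """Minimize `objective` using a geometric cooling schedule (t <- alpha * t)."""
      current, current_cost = x0, objective(x0)
      best, best_cost = current, current_cost
      t = t_start
      while t > t_end:
          for _ in range(steps_per_temp):
              candidate = neighbor(current)
              cost = objective(candidate)
              delta = cost - current_cost
              # Always accept improvements; accept worse moves with
              # probability exp(-delta / t), which shrinks as the system cools.
              if delta <= 0 or random.random() < math.exp(-delta / t):
                  current, current_cost = candidate, cost
                  if current_cost < best_cost:
                      best, best_cost = current, current_cost
          t *= alpha
      return best, best_cost

  # Toy usage: a one-dimensional objective with many local minima.
  if __name__ == "__main__":
      f = lambda x: x * x + 10.0 * math.sin(3.0 * x)    # cost to minimize
      step = lambda x: x + random.uniform(-0.5, 0.5)    # random neighbor move
      x, fx = simulated_annealing(f, step, x0=random.uniform(-10.0, 10.0))
      print("approximate minimum near x = %.3f, f(x) = %.3f" % (x, fx))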