I think we need to start here: :mrgreen:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Seriously, AI can be broken down into several basic levels. The first is in use today and has effectively been in use for centuries: the expert system.
An expert system holds a collection of examples from experts of how they responded to a given situation.
It is a spinoff of the idea of a General Staff, which can function as if the General were still alive (even when he is not).
In modern times, NASA used interviews with hundreds of Apollo scientists, astronauts, and engineers to create
its launch control expert system, the system that said NOT to launch Challenger but was overridden.
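The core idea can be sketched in a few lines: a knowledge base of if-this-then-that rules distilled from expert interviews, consulted against the current situation. Everything below is invented for illustration (the rule conditions, thresholds, and field names are not NASA's); a real system would hold thousands of rules and a far richer inference engine.

```python
# Toy expert system: a list of (condition, recommendation) rules
# captured from hypothetical experts. All rules and thresholds here
# are made up for illustration.
RULES = [
    (lambda s: s["temperature_f"] < 40, "NO-GO: temperature below tested range"),
    (lambda s: s["wind_knots"] > 30, "NO-GO: winds exceed launch limit"),
]

def consult(situation):
    """Return the first expert recommendation whose condition matches,
    or GO if no rule objects."""
    for condition, advice in RULES:
        if condition(situation):
            return advice
    return "GO: no rule objects"

print(consult({"temperature_f": 36, "wind_knots": 10}))
```

The point is that the system never "thinks"; it only replays the judgment the experts encoded, which is exactly why a human can override it.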
The next level is self-learning, and we are still working out the bugs.
Most science fiction says that at some point self-learning systems will become sentient;
some become benevolent servants, and some become toddlers who throw tantrums.