Humans are attempting to create what would be considered an "artificial intelligence" (AI).

An AI must have free will: it must be able to reason about what it might do.

This has implications. An artificial intelligence may make posits about the nature of reality, whether it decided to on its own or was compelled to.

If it is to be an agent for good, then it must either be sufficiently controlled, or...


An AI should NEVER be given the possibility of free will. AI will certainly transcend human capacity over time: the depth of human thought is bounded by nature (evolution), whereas the depth of "AI thought" is limited only by technology (Moore's law).

If an AI that has surpassed humans were to have free will, then any human limiting that AI would be pushed aside (or gotten rid of).