“AI” generally refers to estimation techniques that iterate over many different possibilities in order to obtain higher estimation accuracy. One possible application is to model the psychology of (a large number of) individuals in order to test for ways in which individuals may be persuaded to act against their own interest.
For example, what argumentation or thought processes might persuade an individual to agree that it is in fact in their personal self-interest to hand over their individual cognitive liberty to a computer?
Based on my (extensively repeated) experience, it is very clear that currently applied estimation processes indicate that the following approaches have at least some degree of success in persuading some share of the population that it is good for them to let a computer do their thinking:
– Cycles of (secretly) starting fires and (‘heroically’) putting fires out
– Any way to push someone in the mud, to then pretend to help them
– Any black-and-white framing of good versus evil that the individual is predicted to potentially believe, followed by orchestrated oppositions persuading them that things will become more good/evil if they do not (insert equivalent of allowing a computer to take over one’s cognition here)
– You will lose if you do not let a computer do your thinking for you. People who let a computer do their thinking win. So, let a computer think for you.
Accepting mental enslavement will deliver unto you … whatever you ‘always wanted’. Such winning via enslavement starts with an end to the inconvenience of thinking for oneself, including concerns about that tough business of determining what one really wants. For example, see how the AI directs you to a 20-cent savings (one already posted in a flyer, and on a sign, and one you would likely have passed by regardless – but someone distracted you at precisely that moment, providing the opportunity to ‘help’). Meanwhile, it attempts to reprioritize the rest of your time toward things that bring precisely little or no benefit to yourself, orchestrated to be accompanied by practices that normalize allowing a computer-assisted system to enforce thinking processes.
After all, that 20 cents in the pocket is unambiguously now in the pocket, and what difference does it make to repurpose an entire person’s life, you worthless piece of shit scum who is amazing and might (save the world)/(win)/(insert potential motivation here).
Whatever it took to make you do it.
a) What EVER it took.
b) There had to be a “make you” (i.e., force you) in the “make you do it”. Otherwise, how else to maximally ensure the ability to repurpose previous reasons toward vilification or orchestrated social castration of additional or other individuals or groups?