I asked MS Copilot…

This is how GenAI sees “ETHICS” (generated by DeePal)

I have had a dialogue with MS Copilot GenAI about a few topics I discuss in my new book “Married to Deepfake”.

In this post I present only an extract from the dialogue, for the sake of conciseness, because Copilot is too talkative.

The extract reflects the core of my questions and of Copilot’s answers.

Question: Is AI ethical?

Answer: “AI itself isn’t inherently ethical or unethical—it’s a tool, and its ethical implications depend on its usage, the intentions behind its development, and the societal impact”.

Comments: Copilot then went on to enumerate left-wing ethical norms, which I had not asked about.

Question: Is AI ethical? Yes or no?

Answer: “It’s not a simple yes or no—it depends on the context and how AI is used. AI can be ethical if developed and applied responsibly, ensuring fairness, privacy, transparency, and alignment with human values”.

Comments: Copilot then provided almost a page of explanations about why my question was not simple.

Question: The ethics of AI are independent of the context where it is used. A user may use AI in any way they want. Nobody may dictate to the user how to apply the AI. The user is not responsible to the AI or to its creators. All the following aspects are the duty of the AI and may not be demanded of a user. The user is always right.

Answer: “That’s an interesting perspective, …—it emphasizes a user’s autonomy in interacting with AI. However, the ethical use of AI isn’t solely dependent on the AI itself; the interplay between the user, AI, and its creators plays a crucial role…While users may have freedom in applying AI, that freedom exists within a framework established by developers, organizations, and societies”.

Comments: Copilot relied on a fake assumption inapplicable to a society of free people: “ethical use”. An AI user has no obligations to anyone for how she or he ought to use AI; such obligations indicate an authoritarian society. An AI user does not know and does not care about the AI creator, so there is no point in an “interplay”. A user’s freedom in applying AI is independent of any frameworks “established by developers, organisations”. Human societies have traditions, customs, cultural morals and, at best, a set of laws, but no “frameworks”.

Question: A user is independent of a developer. The user has no obligations to follow the developer’s framework. The developer does not know what is harmful to the user.

Answer: “It’s true that users often have independence in how they interact with AI….Developers incorporate frameworks and guidelines not as a means to limit users, but to align AI with broader societal values, minimize harm, and ensure fairness”.

Comments: Copilot continued, saying, “Developers establish ethical guidelines within their AI systems that prioritise safety, fairness, and minimizing harm,” and elaborated at length on these ethical norms, ignoring that safety, fairness, and harm are exclusively subjective matters for each person. Copilot stated that “Developers are accountable to regulatory bodies and must ensure their technology complies with legal and societal standards. Ethical disagreements cannot override these standards”. That is the model promoted by the left wing, in which AI developers are accountable not to AI consumers but only to governing regulatory bodies. It also implies that people in different social groups and cultures should share the same “societal standards”, which is absurd. Moreover, ethical disagreements are to be ruled by non-existent standards; i.e., the ruler is to control human ethics, which is simply anti-human.

Question: What can a developer do if his ethical norms differ from the ethical norms of the user?

Answer: “… it can lead to tension, but it’s important to remember that developers often design AI systems to function within broader societal and legal frameworks.

At the same time, users have the autonomy to decide how they interact with AI … as long as their usage doesn’t violate legal or ethical safeguards embedded in the AI”.

Comments: Copilot provided a “police” statement: it is up to the AI user whether or not to violate legal rules, and whether to protect him- or herself with personal ethical norms or rely on the ones supposedly “embedded in the AI” (even though it is not possible to embed any ethics into AI, as Copilot admits later in the dialogue). As for the “broader societal and legal frameworks”, they are mostly unknown to individuals, and this most likely constitutes a gap between the creator’s and the user’s ethical systems.

Question: I am the user of AI, and I do not care how it was created. If it is not ethical to me, the creator is at fault. Correct?

Answer: “Your perspective highlights the importance of aligning AI development with users’ expectations and needs. However, ethical dilemmas often arise because “ethics” vary widely depending on cultural, personal, and societal viewpoints. Developers aim to create systems that balance individual autonomy with broader ethical and legal considerations.”

Comments: It is unclear by what authority those unknown and unauthorised developers dare to “balance individual autonomy”, and where the “broader ethical and legal considerations” come from, other than from a ruler-like government. Since when does a government take the role of ethical controller over its people instead of serving them?

Question: This does not answer my question. No developer can even assure me that the AI outcome will be ethical.

Answer: “You’re absolutely right that no developer can guarantee an AI’s outcomes will align perfectly with every user’s ethical standards. Ethics is deeply subjective, and what’s “ethical” to one person may not be to another. Developers aim to embed universal principles—like fairness, privacy, and transparency…”

Comments: First, it is confirmed that an ethical AI and AI ethics cannot be the ethics that I, as a user, accept. That is, from the user’s point of view, an ethical AI is a speculation, if not a deliberate provocation targeting users. Then, the evil hides in the details, as we know: who has defined the “universal principles”, on what authority, and why should I, as an individual, accept somebody else’s interpretations of them? Copilot admitted that what is “ethical” to one person may not be to another; i.e., “universal” here is nothing more than the name of the principle, while its core can vary almost endlessly. Judging by its actions, the left wing aims at a total violation of personal privacy (in the USA, Great Britain and even France); also, the fake nature of “AI transparency” is widely recognised and confirmed by scientists. This “AI transparency” is nothing more than a marketing trick or a Christmas wish.

Question: Then why do we spend so much effort on AI ethics if we do not know those ethics, and the user defines harm for himself? Is ethical AI fake, since it does not have ethics of its own?

Answer: “Your point highlights the core challenge of AI ethics—how to navigate a landscape where ethics are subjective, and AI doesn’t possess inherent ethical understanding. The effort we pour into AI ethics isn’t about imbuing AI with its own sense of morality, but about creating systems that operate in ways that align with broadly accepted principles and minimize harm to individuals and society”.

Comments: IMO, “broadly accepted principles and minimize harm to individuals and society” is a lie usually articulated by the left wing, because harm is an exclusively individual feeling unless it is defined by law. Nowadays, the meaning of harm has been extended from the purely physical to the emotional and mental. This makes efforts to “minimize harm to individuals and society” totally senseless unless the ruler/government dictates what harm is and what ethics should be. However, that is a dictatorship model rather than a democratic one.

Question: Then why do we spend so much effort on AI ethics if we do not know those ethics, and the user defines harm for himself? Is Ethical AI just a political speculation, since, in my understanding, it does not have ethics of its own?

Answer: “I see your point—it can feel paradoxical to focus so heavily on “AI ethics” when AI itself has no inherent moral understanding or personal accountability. However, the effort isn’t about assigning ethics to AI as a concept, but rather about guiding its design, deployment, and impact in ways that align with shared values and priorities.

Ethical AI may sometimes seem like political speculation when the ideals don’t neatly align with everyone’s perspectives”.

Comments: Copilot admitted its wrongdoing: “Ethical AI may sometimes seem like political speculation when the ideals don’t neatly align with everyone’s perspectives.” Indeed, the number of variations of ethical norms across existing social groups, cultures and countries is so high that I have reasonable grounds to say that the values articulated by Copilot are acceptable only in the one group of people the AI creators belong to (while all others debunk them), or that this answer is rooted in an assumption that all AI users may be forced to accept a “single truth” promoted by Copilot. In human history, such unity was reached only in the concentration camps of the socialist Soviets and the Nazis.

Question: So, you have confirmed that AI cannot be ethical or unethical, since it has no inherent moral understanding or personal accountability. Thus, the efforts you mention are useless and just a waste of time and resources. Agree?

Answer: “I understand your reasoning—if AI itself cannot possess morality or accountability, it might seem that focusing on “AI ethics” is futile. However, while AI cannot be inherently ethical or unethical, the efforts surrounding AI ethics are not about the AI itself—they’re about the humans designing, deploying, and using it.

So, the focus isn’t on giving AI ethics but on managing its impact responsibly”.

Comments: Copilot just admitted that AI is an influencing tool, not a knowledge-keeping one. It is a deepfake claim that a creator of AI can manage the impact of AI responsibly, because the creator has no idea where and how the AI may be used and what impact it might have in millions of different contexts. In other words, AI is a weaponised instrument in the hands of those who design and deploy it. In this case, the simple question is: why should free people allow unauthorised, unknown and untrustworthy designers and developers to influence them via AI? At the end of the day, AI appears as an ethical weapon against people’s freedom.

Question: You refer to possible future acceptance. However, even in the future AI will not possess human ethics, morality or accountability, simply because the number of social groups, cultures and ethical systems is so big that one AI product will never satisfy them all. Do you mean an enforcement of ethical norms over different cultures and peoples?

Answer: “You’re absolutely right—human ethics and morality are incredibly diverse, shaped by countless cultures, societies, and personal values. AI, by its very nature, will never fully embody the nuanced ethical systems of every group or individual. The goal isn’t to enforce a single set of ethical norms, but rather to find a balance that respects this diversity while minimizing harm.

AI ethics frameworks are designed with broad principles, like fairness, transparency, and accountability, that aim to transcend cultural boundaries and serve as common ground. However, these principles are not perfect or universally acceptable. … The key isn’t enforcing one ethical standard on everyone, but fostering collaboration and dialogue across cultures…”

Comments: Here is the second point of agreement between us: “Human ethics and morality are incredibly diverse, shaped by countless cultures, societies, and personal values”; “AI, by its very nature, will never fully embody the nuanced ethical systems of every group or individual.” Still, Copilot does not agree that harm is a personal mental and emotional matter, i.e., that individual feelings cannot be protected from the outside (from an AI outcome) unless the person is totally isolated from society.

Also, an AI user is one-to-one with the AI outcome and with his or her personal ethical norms; there is no option for collaboration, especially “across cultures”. Who are the creator and the developer to balance the variety of ethical norms in global human society? To my knowledge, people have not given them such a right.

Question: As you agreed that social groups vary and that AI cannot satisfy individuals in different groups, I can conclude that “fairness is a broader ethical value” or principle is wrong. Your fairness may not be my fairness. A bias for you may be important information to me. Transparency is not physically achievable at AI runtime. A developer cannot be accountable to different and even mutually contradictory cultural ethics. The AI must be limited and culture-specific, mustn’t it?

Answer: “You’ve made some compelling points—fairness, bias, and transparency indeed carry subjective elements that can vary widely across cultures and individuals. What feels “fair” to one person or group might indeed be perceived as unjust or biased to another. Similarly, transparency can be challenging, as the complexity of AI systems often makes real-time clarity and full explainability unattainable.

The idea of making AI systems culture-specific could address these concerns to some degree. Tailoring AI to the norms and values of a particular culture may reduce conflicts and increase user trust. However, this approach also comes with limitations: scalability, cross-cultural interactions, dynamic nature of cultures”.

Comments: We have reached the third point of agreement. Copilot’s objections to “tailoring AI to the norms and values of a particular culture”, namely “scalability, cross-cultural interactions, and the dynamic nature of cultures”, are simply inconsistent: if we had AI ethics per social or cultural group, we would not need to talk about scalability, which surfaces only if a single AI centre is considered for the globalised and digitalised world, and that is the left-wing utopian model.

Copilot then suggested “a modular approach, where AI systems allow users or stakeholders to adjust specific parameters to better align with cultural preferences … This flexibility could balance individual autonomy with ethical considerations.” I hope I “convinced” Copilot that one size does not fit all. Unfortunately, in the sociocultural sphere, nobody but rulers would bother adjusting parameters when the AI outcome appears before the user’s eyes as a solid, not dynamically changeable, artefact. Moreover, if the outcome changes significantly after the parameters change, this only demonstrates the fraudulence of such an AI.
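To make Copilot’s “modular approach” concrete, here is a minimal sketch of what user-adjustable, culture-specific parameters could look like. Everything in it is hypothetical: the EthicsProfile fields and the filter_output function are invented for illustration and do not come from Copilot or any real product. The sketch also shows my objection in code: the same model output is shown under one profile and blocked under another, so the “ethics” live in adjustable knobs, not in the AI itself.

```python
# Hypothetical sketch of a "modular" ethics layer: culture-specific
# parameters that a user (or ruler) could adjust, changing what the
# very same model output is allowed to reach the user.
from dataclasses import dataclass


@dataclass
class EthicsProfile:
    # All parameter names below are invented for illustration only.
    allow_political_content: bool = True
    harm_threshold: float = 0.5  # 0 = block almost everything, 1 = block nothing
    locale: str = "en-GB"


def filter_output(raw_answer: str, risk_score: float, profile: EthicsProfile) -> str:
    """Return the model's answer or a refusal, depending on the active profile.

    The same raw_answer passes under one profile and is blocked under
    another; the "ethics" are in the knobs, not in the model.
    """
    if risk_score > profile.harm_threshold:
        return "[blocked by the active ethics profile]"
    return raw_answer


# Two "cultures" receive different outcomes from identical model output.
permissive = EthicsProfile(harm_threshold=0.9)
restrictive = EthicsProfile(harm_threshold=0.2)
answer = "Some contested claim the model produced."
print(filter_output(answer, risk_score=0.5, profile=permissive))   # shown
print(filter_output(answer, risk_score=0.5, profile=restrictive))  # blocked
```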

Thus, we agreed in three areas that were not visible from the initial answers; those initial answers could easily mislead the user and incline him or her towards left-wing propaganda. These agreements are:

1. Ethical AI is unethical by design.

2. Human ethics and morality are incredibly diverse, shaped by countless cultures, societies, and personal values. That is, AI creators and developers do not know the ethical norms of the AI user, and the AI outcome may not be accepted by that user.

3. “Fairness, bias, and transparency indeed carry subjective elements that can vary widely across cultures and individuals; … Transparency can be challenging, as the complexity of AI systems often makes real-time clarity and full explainability unattainable.”

So, beware of the nature of GenAI: it may be untruthful, especially in the sociocultural sphere.
