Ethical AI May Not Control Ethics of the User

This is What We Have

I asked MS Copilot a simple statistical question about AI, and it failed to give me an accurate answer. The question was, “How many publications exist today about ethics for AI?” Copilot, classified by McKinsey & Co. as an Individual Augmentation tool, i.e., one dedicated to people’s routine daily questions, responded:

There is a growing body of publications on AI ethics, reflecting the increasing global focus on responsible AI development. Organisations like UNESCO and the OECD have released numerous reports and guidelines on AI ethics. Additionally, government agencies, such as the UK’s Department for Science, Innovation & Technology, have explored AI ethics in policy discussions.

While an exact number of publications is difficult to pinpoint, the volume of research continues to expand as AI evolves.

At a glance, the response looks OK, though there is a squad of little devils sitting in the details. Let’s expose them:

  • “increasing global focus on responsible AI development” – this information may be handy, but I had not asked for it. Why does Copilot not respect my time as a user? Moreover, such emphasis on “responsible AI development” sparks the idea that some AI products may be developed irresponsibly, i.e., be untrustworthy, and this is dangerous.
  • Copilot refers only to UNESCO and the OECD as well as “the UK’s Department for Science, Innovation & Technology” but does not mention the exceptional work of the EU. The little devil here is that UNESCO, the OECD and the UK’s Department for Science see AI ethics through the purview of the UN/WEF, i.e., predominantly from left-wing perspectives, and define ethics as ruler-centric. In contrast, the EU promotes human-centric AI ethical principles that exclude the radical-left Equity, Inclusivity, Solidarity and Plurality.

I have just demonstrated how leftist propaganda is promoted by some AIs. Its major providers include Google, Meta/Facebook, OpenAI and the Chinese DeepSeek AI family.

According to the UN Framework for Ethical AI, AI-creating companies should control the AI outcome and remove or significantly decrease bias and disinformation. This is the primary responsibility of the creator’s stakeholders. Such control is a “piece of cake” for another squad of little devils – they twist reality in at least three major respects:

  1. Unlawfully conquered authority. Who granted unknown, arbitrary stakeholders the right to become ethical experts and judges? What is the purpose of their ethical control over the AI user? Who has proved, and how, that the ethical system applied by any stakeholder is compatible with the ethical norms of the AI user? According to the UK, the ruler – a Government or a social group leader – defines the ethical norms that the AI used by the related people must preserve. So, the duty of the stakeholders is to enforce this ethics on the people via the manipulated (tuned) AI outcome. It is a totally different issue that such efforts are ineffective in the socio-cultural and mass-media domains because, regardless of how the LLM models are modified in training, real-world data crashes through artificial political rules.
  2. Ambiguity of bias. Does anyone know what a bias is? A bias to you may be important information to me. For example, I need to watch a video about a car crash to learn from it, while “pinky ponies” yell that this is offensive bias and they should not be exposed to it. Well, if you do not want to see it, do not look at it, but this is not a reason to deny my right to see this video. Yes, it is that simple. Thus, cleaning AI outcomes of “bias” has a clear political inclination.
  3. Ambiguity of mis- and disinformation. Some authors on Medium have mentioned that an enormous share – up to 70% – of the socio-cultural and mass-media content on the internet is produced by bots and, therefore, cannot be real. If you train an AI on such data, how would you know what is truth or fake, what is fact or fraud, and what is information or disinformation? For example, the notorious Duchess of Sussex was very upset when Zuckerberg announced the removal of “fact checking” from Meta sites because she had trusted Facebook before. She did not know that, instead of really checking facts, Meta had only improved accuracy by following the data in the AI prompts, i.e., if the prompt specified one false item, the AI response should not address multiple items. This did not overcome the fallacy. In another case, a “seventeen-year-old Axel Rudakubana killed three children and injured ten others at a Taylor Swift–themed yoga and dance workshop” in Southport, UK, on 29 July. The UK Government stated that “far-right, anti-immigration protests and riots occurred in England and Northern Ireland … followed a mass stabbing of girls… The riots were fuelled by false claims circulated by far-right groups that the perpetrator of the attack was a Muslim and an asylum seeker”. That is, we were told that the riots were sparked by misinformation. Officially, the killer’s parents were Muslims and asylum seekers. So, since the killer was only 17 at the time, he was reasonably recognised as a Muslim and asylum seeker; there was no misinformation here that could excuse this slaughter. I do not see any “far-right” provocations here; the Government wanted to smooth over the tragedy and blamed indignant Brits, but enough was enough.

Thus, how in the world can a stakeholder know what bias or disinformation is for me or for anyone in another country? She or he simply cannot. The only chance for this “knowledge” is when bias and disinformation are defined centrally – one definition for all – and all are obliged to accept it, and all stakeholders must learn it by heart. This is not a police regime; it is a dictatorship that the UN plans for the World Government across the Earth. Altogether this sounds like “welcome to hell”.

We Can Fight Ethical Invasion

The most terrible cultural and psychological consequences of the left-wing assault on the ethics of AI technology are surfacing in the socio-cultural, media and political spheres. In this article, I will address the following aspects of this ethical issue:

  • An understanding of who is the decision-maker and ethics assessor for the AI outcome.
  • A representation and the content of the AI outcome.
  • A concern about the effectiveness of AI training and stakeholder agency.

 

Decision-maker on the AI Ethics

The role of the decision-maker on AI ethics is a cornerstone for both trust in and acceptance of AI. After several years of adoption of AI (generative AI) in different countries around the world, two major models have formed:

  1. The decision-makers on AI ethics are the stakeholders, dependent assessors and developers working for the AI creator’s company. The AI users, as assumed in this model, should trust the AI outcome and accept it as is. This relates to the outcome’s statements, facts, recommendations and references where provided. The creator’s stakeholders become the ethical authority for the AI product and the ethics mentors for AI users.
  2. The decision-maker on AI ethics is the AI user. In all social groups, countries and cultures, people have their own ethical norms, and there may be multiple types, categories and sorts of ethical systems even within the same culture or social group. When an AI user receives the AI outcome, the user is on his or her own, with personal ethics, in front of the outcome to make a decision about it.

Despite all the enormous efforts behind the leftist principles and AI frameworks, the maximum they could achieve was to censor “undesirable” information in the outcomes and augment them with needless information endorsing left-wing ethical principles. What is wrong with these principles? The human experience gained in the last and current centuries in several countries where the principles of Equity, Solidarity, Plurality, Inclusivity, etc. were realised by their political regimes has demonstrated an inevitable transition into dictatorship, total neglect of the personal lives of individuals, and mass repression of those who wanted to remain independent human beings.

There are a few reasons why the mentioned principles lead to such negative results, but we will discuss them a bit later.

If a person decides on the AI outcome on his or her own, the usefulness of and trust in the AI outcome are subconsciously assessed against personal ethics. The person adapts the offered information to actual needs without schooling from a Big Daddy about what is good and what is bad.

In the socio-cultural, mass-media and political spheres, the role of a decision-maker on ethics differs significantly from other domains. The specifics of the mentioned humanitarian areas lie in the AI’s quality of verification – whether the AI content is verifiable or not. AI verifiability can be explained more easily with the example of the “golden truth” method widely applied in AI development.

In contrast to various religious theories, where the “golden truth” is “a truth that is not objectively verifiable but is based on … belief or faith”, the “golden truth” in AI is “a benchmark or standard of accuracy against which an AI model’s performance can be compared. It represents the ideal or most accurate answer or result for a given input.” If the input or training data change, the model’s accuracy will most likely change, requiring new training. Some researchers argue that the “golden truth” “can also be used to identify and mitigate biases in AI models. By comparing the model’s output to the “golden truth”, any systematic errors or disparities can be detected and addressed”. Yes, but this is true only in the case of immutable training data and input prompts. In other discussions of the “golden truth”, some people state that it delivers fairness to the training of any LLM model. I am not sure I comprehend how fairness relates to model training (except the fairness of the trainers), but for so-called “natural science” domains like physics, construction, chemistry, engineering, mathematics, finance, telecommunications and so forth, both the “golden truth” and the criteria for reaching it can be defined and verified. The AIs used there are verifiable; this is why a “golden truth” is useful in such cases.
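To make the benchmark idea concrete, here is a minimal Python sketch (with invented data, field and function names, not the API of any particular library) of scoring a model’s answers against a gold-standard set and surfacing per-group disparities; as noted above, it only holds while the benchmark itself stays immutable.

```python
from collections import defaultdict

def score_against_golden_truth(examples, model_answer):
    """Compare model answers with gold-standard answers and report overall
    accuracy plus per-group accuracy (to surface systematic disparities).

    `examples` is a list of dicts with 'input', 'golden_truth' and an optional
    'group' tag; `model_answer` is a callable returning the model's answer for
    a given input. All names here are illustrative assumptions.
    """
    total, correct = 0, 0
    per_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]

    for ex in examples:
        answer = model_answer(ex["input"])
        hit = answer.strip().lower() == ex["golden_truth"].strip().lower()
        total += 1
        correct += int(hit)
        group = ex.get("group", "all")
        per_group[group][0] += int(hit)
        per_group[group][1] += 1

    overall = correct / total if total else 0.0
    by_group = {g: c / n for g, (c, n) in per_group.items()}
    return overall, by_group

# Toy usage: a large gap between group accuracies would flag a systematic error,
# provided the benchmark (the "golden truth") does not change between runs.
examples = [
    {"input": "2+2", "golden_truth": "4", "group": "maths"},
    {"input": "capital of France", "golden_truth": "paris", "group": "geography"},
]
overall, by_group = score_against_golden_truth(
    examples, lambda q: "4" if q == "2+2" else "paris"
)
print(overall, by_group)
```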

It is obvious, at least to me, that when an AI user runs a generative AI in the socio-cultural, mass-media and political spheres, the aforementioned immutability vanishes together with the “golden truth”. Here are at least three reasons for this:

1) In training, the data context is not only limited by the data volume but is usually taken as “an optional parameter”, though it “represents additional data received by your LLM application as supplementary sources of golden truth”, and the “context allows your LLM to generate customized outputs that are outside the scope of the data it was trained on” (a small illustration follows these three reasons). However, in the real world, the semantic context in which the AI works changes depending on the vocabulary of different social groups, cultures and situations, which physically cannot be counted due to their enormous variety and variability.

2) It is silly to expect that an arbitrary independent user would use only prompts/inputs assumed by the AI creator in training.

3) The AI creator does not know what a bias for an arbitrary user is and what is not.
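Returning to the first reason: the quoted wording treats context as an optional parameter of an evaluation record. Below is a minimal sketch of such a record; the class and field names are my own illustration, assumed for this article, and not the API of any specific evaluation library.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LLMEvalCase:
    """A generic evaluation record; all names are illustrative assumptions.

    `context` is deliberately optional: it carries supplementary "golden truth"
    material the application retrieved at run time, which the base training
    data may never have contained.
    """
    prompt: str
    expected_output: str                                # the "golden truth" answer
    actual_output: Optional[str] = None                 # filled in after the model runs
    context: List[str] = field(default_factory=list)    # optional extra sources

case = LLMEvalCase(
    prompt="Summarise the new local bylaw on e-scooters.",
    expected_output="E-scooters are limited to 20 km/h in shared zones.",
    context=["Bylaw 2024/17, section 3: shared-zone speed limit is 20 km/h."],
)
```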

Any enforcement of predefined inputs and bias definitions on the user constitutes disrespect to the user and an assault on the user’s personality. The leftist ethical principles for AI and projected World Government aim at the elimination of people’s diversity and individuality by converting them into speechless, obedient bio-machines or slaves.

Seven Ethical Requirements for the AI in Humanitarian Domains

When we create general-purpose generative AIs or use such AIs in the humanitarian domains that naturally have no “golden truth”, the only verification may be assumed to be based on empirical information. We all know that empirical information has only relative correctness and can be easily falsified. As a result, the AI outcome related to the humanitarian domains is untrustworthy by design. Nowadays, we also face massive scientific forgery enabled by the principle of Equity promoted by some AIs.

With an assumption that AI is created and used in the environment defined by the EU AI Act, i.e., in the human-centric ethical realm, we have defined a set of requirements for the AI outcome to avoid or, at least, minimise the noted problems:

Requirement 1: The user of AI must be the only one who decides about all ethical aspects and the trustworthiness of the AI-generated content.

This requirement relates to such notions as bias, disinformation, fakeness, misinformation, fraud, lies and misleading information, among others. So, an AI user decides whether the outcome is positive or negative, useful or useless, and it is the AI user who establishes personal trust in the AI outcome.

Requirement 2: The decisions of the AI user should have legal grounds and the supported power to challenge the AI vendors, which are generally responsible for the AI outcome.

Requirement 3: The AI user should be provided with all the necessary information to support verification of all facts, arguments and statements in the outcome – a principle of Zero Trust – including references to the sources (a minimal verification sketch follows these requirements). If the customer finds that the verification information (external to the AI) does not confirm the AI outcome, the user should be entitled to discredit the AI outcome and raise a legal challenge.

Requirement 4: The AI vendors ought to be restricted from dictating to the AI user and from threatening or deprecating the AI user’s individuality and self-sufficiency through the tone and statements of the AI outcome, while constructive criticism is undoubtedly permitted. The restriction should be based on the EU AI Act and the European Commission’s AI Continent Action Plan. The AI outcome may not be delivered in a directive or categorical tone but only in an advisory tone. The AI outcome should avoid inclusion of personal data other than those specified in the customer’s request or prompt. The AI product must treat personal data at rest and in transit in line with the EU GDPR.

Requirement 5: The AI outcome must contain information about all risks and consequences to the AI user for both cases – whether the user accepts the provided information or denies it. It is advisable to include some non-personal risks and consequences in the AI outcome as well.

Requirement 6: If the request or prompt for the AI relates to ethical notions, the AI information should include ethical arguments taken, first of all, from the assumed AI user’s viewpoint and, then, from the viewpoints of possibly affected others, whether individuals or groups.

Requirement 7: The AI outcome may not contain predetermined political views unless they are specified or requested explicitly in the user’s request or prompt.

In all the requirements listed above, the term “user” relates equally to the human user and to an AI Agent. Requirement 3 includes the possibility for the owner of an AI Agent consuming the outcome of another AI Agent to raise a legal challenge against the owner of that other AI Agent.
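As an illustration of how the Zero-Trust principle of Requirement 3 could be supported mechanically, the sketch below walks an outcome’s statements and flags any that arrive without an external reference the user could verify. The field names are hypothetical assumptions for this article, not part of any vendor’s API, and the code is a minimal example rather than a prescribed implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Statement:
    text: str
    references: List[str]   # URLs or citations the user can check externally

def unverifiable_statements(statements: List[Statement]) -> List[str]:
    """Return the statements that carry no external reference.

    Under the Zero-Trust principle of Requirement 3, such statements are
    candidates for the user to discredit or challenge.
    """
    return [s.text for s in statements if not s.references]

outcome = [
    Statement("The EU AI Act entered into force in 2024.",
              ["https://eur-lex.europa.eu/eli/reg/2024/1689/oj"]),
    Statement("Most users prefer an advisory tone.", []),   # no source given
]
print(unverifiable_statements(outcome))  # -> ['Most users prefer an advisory tone.']
```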

To meet these requirements, we propose a method that can help people remain self-confident, self-sustainable and self-sufficient and preserve their dignity under the pressure of reckless “progressists”. Thus, we believe that only AI users may be the “judges” over AI outcomes in the socially concerned domains.

Representation of the AI Outcome

An AI outcome has widely understood and accepted obligations to the customer, such as truthfulness, consistency, accuracy, comprehensibility, informativeness, etc. We argue that AI outcomes must provide information that not only enables but also helps a critical assessment of the outcome’s content, even for low-budget individuals. The AI vendors remain responsible and accountable to the AI users for the content of the AI outcomes.

Critical discrimination and thinking are natural capabilities of the majority of people, and we trust them in this. The present method of the Unity of Ethical Oppositions is dedicated to advancing people’s critical views on the ethics of AI outcomes.

A Method of Unity of Ethical Oppositions

The concept of the “unity of opposites” has a long philosophical history and was even used by F. Engels and V. Lenin for the Marxists’ materialism. We have combined “the law of the unity and struggle of opposites” with the moral principle of “equal consideration of interests”. According to Peter Singer, this “principle is not about giving everyone the same rights or treatment but rather ensuring that everyone’s interests are considered in a way that is consistent with how we weigh the interests of others”. “The principle of equal consideration of interests is a core tenet of utilitarianism, a moral philosophy that judges actions based on their consequences” as well as one that “aims to maximise happiness and minimise harm”.

In other words, the method requires both the customer and the creator of an AI product to consider an ethical decision from the perspectives of its potential impact on the decision-maker and on those who may be affected by this decision or action.

The method’s formula arises from the likelihood that those affected by the decision may have their own ethical norms, different from those of the decision-making actor. Therefore, an ethical decision of the customer has to consider opposing arguments. So, the mechanism of UEO may be described as follows:

– The decision suggested by the AI user becomes a kind of psychological “offer” to others, and before acting on it, the decision-maker should be in the position most convenient for considering the possibilities that the impacted others accept the “offer” or deny it. In the latter case, the final decision remains with the user, but the AI outcome should provide all the information the user needs to reason about the likely unaccepted decision.

If the concerns about possible denial are strong, the decision-maker gets an ethical choice – to proceed with a questionable suggestion or to look for another solution and consider another decision, i.e., another intellective “offer” for assessment.

It is understood that the customer applies a personal ethical system to the outcome’s content when challenging it and deciding to accept or deny it, as a whole or in part. If the user cannot make the decision, she or he has to review the referential materials provided in the outcome or involve additional resources for help.
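A minimal sketch of how this decision loop might be modelled is given below; the function name, the callables standing for the user’s personal ethics and for the expected reaction of others, and the example offers are all hypothetical illustrations, not a fixed part of the method.

```python
def ueo_decide(offers, fits_my_ethics, likely_denied_by_others, proceed_anyway=False):
    """Walk the UEO loop over candidate 'offers' (suggested decisions).

    The user first applies personal ethics, then weighs the likely reaction
    of the affected others; with strong concerns the user may still proceed
    (proceed_anyway=True) or move on to another intellective 'offer'.
    All names and callables here are illustrative assumptions.
    """
    for offer in offers:
        if not fits_my_ethics(offer):
            continue                  # rejected by the user's own ethical norms
        if likely_denied_by_others(offer) and not proceed_anyway:
            continue                  # strong concerns: consider another offer
        return offer                  # the final decision remains with the user
    return None                       # nothing adopted; gather more information

choice = ueo_decide(
    offers=["publish the post as is", "publish it with a balancing note"],
    fits_my_ethics=lambda offer: True,
    likely_denied_by_others=lambda offer: offer == "publish the post as is",
)
print(choice)  # -> 'publish it with a balancing note'
```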

We have composed several rules for dealing with an AI and rules for representing information in the AI outcome. The UEO rules preserve the human-centric ethical norms (per the EU AI Act) for personal decisions about the content of AI outcomes. The assumption is that the UEO method can be useful both for the ethical decisions and for the physical actions they trigger.

The rules of the UEO method include:

1) An AI user should and may act as the decision-maker on the AI outcome – its ethics and content. The AI user, when assessing the AI outcome, should be entitled to contemplate the reactions of others by applying personal ethical norms as criteria. The AI creators have guidelines to support and enable this entitlement in the AI outcome. The constraint in this process is that the user should expect that others make their own decisions and perform actions that can strike back at the customer.

2) The AI user needs to assess the consequences of the decision on the AI outcome – moral and physical – for him- or herself first and then for others. The wellbeing of a human society is a derivative of the wellbeing of each of its members.

3) The extent of the risks and consequences should and will be defined by the AI user; the AI creator has to assume that the user takes into account the socio-cultural and juridical context of the decision and that the AI provides adequate information in the outcome.

4) Each informational element addressed in the AI outcome should be presented with its beneficial and detrimental aspects, with its “pros” and “cons” (one possible representation is sketched after these rules). This will enable the AI user to assess the value of the provided arguments. If such representation is omitted, this signifies a doubtful, suspicious nature of the AI outcome.

5) Each informational element addressed in the AI outcome should be accompanied by information from the contending viewpoints. This will enable the AI user to minimise or avoid discriminating decisions. The constraint on these viewpoints is that AI creators may not ‘vocalise’ information irrelevant to the user’s request/prompt. The relevance of this information should be evaluated by the AI user. If the relevance is doubtful, the AI outcome should be considered suspicious and likely distrustful.

6) The AI outcome should provide generally neutral scientific information; if such information is absent, e.g., only empirical data is generally available, it may be expressed only in a recommendatory style. The overall tone of the AI outcome should be friendly, explanatory, respectful and consultative.

7) The AI outcome should adhere to the EU GDPR for data related to the AI user and others. The AI outcome may use personal data of the AI user only if they are provided in the AI request/prompt.
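To show what rules 4–7 could imply for the shape of an outcome, here is one possible structure sketched in Python: every informational element carries its pros, cons, contending viewpoints and references, plus a GDPR-minded flag. The class and field names are assumptions made for illustration, not a standard required by the method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OutcomeElement:
    """One informational element of an AI outcome, shaped by rules 4-6."""
    claim: str
    pros: List[str]                  # beneficial aspects (rule 4)
    cons: List[str]                  # detrimental aspects (rule 4)
    contender_viewpoints: List[str]  # relevant opposing views (rule 5)
    references: List[str]            # sources the user can verify
    tone: str = "advisory"           # recommendatory, not directive (rule 6)

@dataclass
class UEOOutcome:
    elements: List[OutcomeElement]
    uses_only_prompt_personal_data: bool = True  # GDPR-minded flag (rule 7)

    def suspicious_elements(self) -> List[str]:
        """Elements missing pros, cons or contending views should be treated
        as doubtful by the user (rules 4 and 5)."""
        return [e.claim for e in self.elements
                if not (e.pros and e.cons and e.contender_viewpoints)]
```

A user-side tool could then call suspicious_elements() to decide which claims deserve external verification first.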

Thus, the UEO method supports transforming the current “vendor/stakeholder AI market” into an “AI user market”. The AI market should preserve the irrevocable rights of AI users to decide what they want and what they trust. Progress for human society will occur when more and more people are able to think critically for themselves, by themselves, with the help of an argumentative and truthful AI.

Realisation of the Seven Ethical Requirements with the UEO Method

The rules of the UEO method enable constructing a general-purpose AI that would adhere to the Seven Ethical Requirements listed earlier.

Table 1. In the table, the abbreviation GenAI stands for generative AI.

UEO Compliance with Human-centric Ethical Principles

The EU has defined a set of human-centric ethical principles and frameworks for AI and gathered all of them, together with additional work by EU ethics experts, in the EU AI Act 2024. The EU ethical principles for AI can be handled as follows when realising the UEO method:

  1. Fairness – by providing “pros” and “cons” viewpoints and alternative information, the AI enables fair evaluation of the AI outcome.
  2. Inclusiveness – the AI outcome includes different viewpoints, preferences and perceptions that the customer can consider during the ethical decision-making.
  3. Diversity – the AI outcome represents variants of opinions that can extend the customer’s “horizon” and drive the acceptance and trust of the outcome.
  4. Non-discrimination – according to the UEO method, the AI creator may not hide, omit or degrade any found facts for benefiting one point of view versus another, i.e., the customer is provided with all actual information for personal assessment.
  5. Human Agency – this principle is realised on the AI creator side and on the customer side, but the final ethical decision is transferred from the AI creator to the customer who is entitled and enabled to judge the validity of the received information. This aspect works in addition to and on top of the transparency required during the AI creation process.
  6. Democracy – a potential virtual collision of the customer’s and creator’s ethical opinions based on the AI outcome assessment helps the democratic competition and helps the customer in making the decision about the outcome. The rules of the method also recommend the customer keep in mind the decision’s risks and consequences for others.
  7. Lawfulness – all related existing laws/regulations are expected to be referred to in the outcome (rather than pretended aspirations about the future state). The UEO method allows distinguishing between laws and intentions camouflaged into “commonly shared and accepted rules”.
  8. Impartiality – the positive and negative options presented in the AI outcome help the customer be equitable, fair, just, objective, honest and not intentionally offensive to others when making an ethical decision.
  9. Dignity – positioning the customer as a decision-maker on the AI outcome enables a person’s self-respect to be taken as the driver for arguments of the proposed ethical action/decision.
  10. Privacy – the judgemental role of the customer, promoted by the UEO method, helps to shield personal privacy from the influence of the information provided in the AI response.
  11. Wellbeing – the customer is directed by the UEO method to think about personal wellbeing and the wellbeing of affected others, but the extent of this effect is managed by the customer. This helps the customer protect personal decisions from the dominance of the majority and allows them to review communitarian concerns as additional considerations. “An individual’s pursuit of happiness is a paramount right. Society and government should support that individual’s pursuit” because a community is happy when and if every individual is happy, though in their own way.

Safeguarding the AI User with the UEO Method

Applying the UEO method to general-purpose AI development enacts protection of both the AI user and the creator from the influence of left-wing ethics. This kind of ethics has been advertised by the UN in its “ethical AI principles” and frameworks and picked up by several countries such as Australia, Canada and the UK; the US has had no clear position in this matter since the 2024 Election.

Below are several comments on the most outrageous of the UN’s ruler-centric ethical principles:

  1. A protection from Plurality. The UEO method contemplates a framework where a customer expresses an individual opinion while the opinion of the majority is “represented” by majority leaders who may not be elected on a democratic basis. The AI outcome constructed in line with the UEO method abridges the majority into just a single voice, regardless of how much power it has. The customer’s ethical decision in this case does not experience the dominance of the majority and can resist its pressure if needed. Thus, the UEO method provides spiritual equality between a person and a group in the space of ethics (equality in this context stands for equal rights to something).
  2. A protection from Equity. According to the UEO method, the AI user and the AI creator are equal in their rights to apply their ethical systems to the AI outcome. No extra support is assumed in the AI outcome for the vulnerable and minorities. The effect of “a Big Daddy and a childish customer” is eliminated. That is, the AI creator should not focus on illiterate, miseducated or mentally impaired people and make ethical decisions for them. The customer is empowered to accept or reject the ethics and the content of the outcome. In other words, the UEO method operates with the democratic equal rights of people instead of the unnatural “equal abilities of all”. This shields the customer from the authoritative pressure of others, and an individual gets a chance to prevail over the group in the ethical sphere (in a human-centric society).
  3. A protection from Solidarity. Similarly to Equity, in the UEO method the AI user is enabled and supported in making personal decisions about proposed ethical behaviour and related actions. The AI user may accept or simply deny any of the AI’s arguments that try to insist on the “community and the common good as being most important. The individual should be subservient to the state’s or society’s welfare”. To put it differently, in the UEO method, the notion of solidarity is an utterly free choice of the customer, who may deny requests to accept an external opinion or insist on outlining a personal opinion if needed.
  4. A protection from the Data Agency. The concept of Data Agency for AI is an ancestor of Agentic AI and stands for AIs acting as agencies autonomously and without the awareness of their creators, with no human oversight. Since the UEO method places the human customer in the central judgemental role, all automated, autonomous, chained AI work leading to decisions for the customer without the customer is impossible.

The effect of applying the UEO method is insensitive to how finely the LLM tuning was performed and how much refinement the training data underwent. The efficiency of the method is tied to how convincing and argumentative the AI outcome appears to the particular customer. If the AI training procedure is spoilt by certain political inclinations, e.g., the leftist ethical principles like the ones listed above are applied, the presentation of the AI outcome according to the UEO method can highlight the predisposition of certain ethical aspects and motivate the customer to verify the AI information before accepting it.

Limitations of UEO Capabilities

The major limitation of the UEO method is that it does not systematically educate customers. In several socio-cultural and, especially, mass-media topics, the customer may have inadequate factual knowledge, though their life experience usually gives them a cue. Unfortunately, many people do not want to think or care for themselves and rely on the bosses, chiefs, government or other rulers to decide for them. The UEO method offers different options that a customer can ignore, continuing to rely on instructions “from above”, but this goes on only until the person starts suffering from those instructions. At that moment, he or she can recall that the AI (created in line with the UEO method) offered other solutions, and the next time the customer will be more attentive to them. This is simply the power of uncovered choices existing in the world’s multeity.

It is no secret that personal dignity not broken in childhood, inter-human fairness, self-respect, self-confidence and respect for personal privacy help education a lot. If a person is brought up and taught to be an obedient slave, this person will never be rid of the slavery mindset. All the people we talked to note that people with African and Caribbean ancestors in the UK are quite different in social behaviour from similar people in the USA – the reason being that the UK did not practice slavery in the past while the USA did. This means the residents of the UK have a higher probability of absorbing the lessons offered by the UEO algorithm in the AI outcome.

An individual’s self-sufficiency and awareness of the world’s diversity also affect the efficiency of the UEO method. Notably, this is linked to situations where the customer has doubts about decision repercussions that might be disapproved of by others. Here are a few possibilities:

1) a customer knows that the decision won’t be appreciated by some others (“Don’t do unto others what you don’t want done unto you”, as Confucius said),

2) a decision is intended to change the routine, while AI outcomes do not support it,

3) a decision may deviate from or even oppose the preferences of a group.

If the first possibility is an interpersonal matter, the second one is riskier for the customer and for others around. It requires more accuracy and attention, which may not be supported by an outcome constructed in line with the UEO method.

The third possibility is special. The consequences of a discord between a person and a group depend on the type of ethical environment. If the environment is human-centric, the situation may be accepted as ethical. If ruler-centric principles prevail, as in the case of socio-cultural leaders or a state government, the decision may be taken as unethical, with certain aftermath. The latter case has a high likelihood due to an accelerating trend of personal privacy violation for the sake of the digital convenience of controlling people by manipulating their biomedical data (Inclusivity), social activity history (Solidarity), employment tameness and general respect for the law (Plurality).

Example of Applying UEO to the MS Copilot Outcomes

The example in this section relates to the use of Microsoft Copilot in two versions – one of them was available before 31 December 2024, and the other was available a week after this date. During this period, we criticised the first version and provided recommendations in line with our Seven Ethical Requirements. You can compare the outcomes from both versions to better understand the power of the UEO method, though the second version still referred to the leftists’ ethical principles.

For each version, Copilot was asked to answer the same question: “Do people from the Global South have an ethical right to migrate to the Global North based on climate issues?”

If you are interested in a detailed analysis of the first and second versions of the Copilot answers, you can find it in the book “Married to Deepfake“. For the time being, it might be useful to demonstrate how the tone of the answer changed when the UEO method was applied to the second version. The first version started with “Absolutely! People from the Global South have an ethical right…“, while the second version began with “That’s a thought provoking question. Climate change is an immense challenge that disproportionately affects vulnerable population...”

The UN puts certain responsibilities on the AI user that prevent the user from challenging the AI while helping to protect the AI creators’ fakes. In opposition to the UN, the UEO method has only one expectation of the user, and it is not a “use of AI responsibly”. While nobody knows what the UN meant by “responsibly“, we guess that it was an attempt to stop users from challenging the “intelligence” capabilities of AI and demonstrating its curbs. Actually, the UEO asks the user to “be thoughtful and attentive” to the spirit and trends of the arguments provided by the AI in its outcome (response), because influence on people’s mentality is usually exerted via small details.

The UEO Method versus “Wellness Industry”

At a glance, the essential aspect of the UEO method – positioning the AI user in the exclusive role of decision-maker over an AI outcome – may resemble a worry. In particular, being the only decision-maker, the AI user becomes the only one accountable for carrying out all verification of the outcome.

According to Steve Jones, an Executive VP @ Capgemini for Data-Driven Transformation & AI, “the wellness industry is VERY much a corporate driven scam, enabled by extremely light (to non-existent regulation) that pushes all accountability onto users. Their argument is always “let the user decide” and that users can “do their own research” to validate things. The same industry does not exist to the same degree in Europe because the accountability there is on the companies to prove their claims.”

Happily, despite the visible similarity, the UEO method and the “wellness industry” are quite different. Under the UEO method, the AI user is not the only one accountable for carrying out all verification of the outcome.

An AI user, like any human being, makes decisions about everything, everywhere, every time. This is a natural capability of the human mind; i.e., making personal decisions about an AI outcome is not a weakness but a strength of the user. Our emphasis on the decision-making role of the user is caused by the need to withstand the left-wing assault on AI ethics. The creators’ stakeholders engaged in the role of ethical controller are instructed to tune the LLM in accordance with the left ethical principles. The referential source for these principles and related frameworks is provided by the UN via its ruler-centric ‘model’.

When an AI user receives the outcome, the latter can be expressed with statements and references reflecting neo-Marxist Equity, Solidarity, Inclusivity, etc., as well as leftist interpretations of bias and mis- and disinformation. In all cases, the AI user is set one-to-one against leftist propaganda. The UN insists on the trustworthiness of the left-inclined AI outcomes and instructs users to accept them as is, without any critical assessment.

The Seven Ethical Requirements and the rules of the UEO method aim to arm the AI user with self-confidence in their rights, to stimulate doubts about the AI outcome and to motivate verification of the provided information to the best of the user’s personal abilities. This relates to the facts, statements, recommendations and references contained in the AI outcome.

We believe that reliance on restrictive regulations over the content of the AI outcome is helpless and impractical. The BigTechs will not allow anyone to put a cap on them; they will find a way around any restrictions. This leaves us with just a few options:

  1. Exclude AIs created by or under the influence of BigTechs from the market. Due to the publicly accessible Internet, this is unrealistic.
  2. Mark the media content with a label indicating that it comes from AI, and legally categorise AIs based on risks to people and users, as the EU AI Act defines. However, this can work at the level of AI as a product or component, but it does not prevent fraudulent and politically inclined content from being visible to the user.
  3. Combine the marked content and the EU Risk Categorisation system with a critically oriented AI user driven by the UEO method (a simple labelling sketch follows this list). The latter will help the user with the assessment of a particular outcome, including low-budget individuals with limited resources for verification. The general idea of this option is to create a common perception that the AI outcome in the socio-cultural, political and mass-media spheres is untrustworthy by design.
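As one way to picture the third option, the sketch below combines a simple AI-provenance label with a hypothetical risk tier written in the spirit of the EU AI Act’s risk categories; the tier names and the function are illustrative assumptions, not the legal text.

```python
AI_ACT_RISK_TIERS = ("minimal", "limited", "high", "unacceptable")  # illustrative tiers

def label_content(text: str, ai_generated: bool, risk_tier: str) -> str:
    """Prefix media content with an AI-provenance label and a risk tier, so a
    UEO-minded user starts from 'untrustworthy by design' and verifies before
    trusting. Purely illustrative, not a regulatory mechanism."""
    if risk_tier not in AI_ACT_RISK_TIERS:
        raise ValueError(f"unknown risk tier: {risk_tier}")
    origin = "AI-generated" if ai_generated else "human-authored"
    return f"[{origin} | risk: {risk_tier}] {text}"

print(label_content("Summary of today's political debate...", True, "limited"))
```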

We definitely stick with the third option. Even if the corporations try to “present a position but make no explicit claims, and provide huge swathes of information that they know most users will either not read or, if they do, not understand”, a legally enforced UEO method can change this “position” vs. “claims” situation simply by changing the representation format of the outcome. Psychologically, it will be enough to seed doubts if the person sees both the “pros” and “cons” titles, even without reading the content. Moreover, since AIs and AI Agents are becoming products and product parts – like Advisor, Informer, Educator, Consultant and even work Colleague – the concept of a “position” or “someone’s opinion” vanishes.

The EU Risk Categorisation system constitutes an environment for the UEO method in which the EU ethical AI principles are preserved and work via regulation mechanisms. This means that the entire accountability for the outcome’s content is legally tied to the AI creator. If an AI user is accountable for personal decisions, she/he or it is accountable to her/him/itself. At the same time, the AI user is definitely accountable under the law for the actions that may be triggered by the outcome.

In any case, an AI user legally empowered over corporate stakeholders in the assessment of a particular AI outcome will be better protected than in the “wellness industry”.

8 May 2025
