
AI bias: how our perception influences outputs
4 November 2025

Konstantia Moirogiorgou, Head of R&D, Seven Red Lines
AI systems often suffer from bias: anomalies in an algorithm's outputs that arise from assumptions baked into the training data or into the framing of the problem to be solved. These biases are either cognitive, caused by the human tendency to simplify information, or algorithmic, rooted in the preexisting beliefs of the developers. Other types of bias also occur: incomplete datasets whose training data do not represent the real-world population, models trained on historical data that reflect past prejudices, or systems that inadvertently favor information confirming existing beliefs.
Going a step further, there is a broad discussion arguing that the study of AI bias must move beyond values alone to include ontology. The assumptions that precede an AI design reflect different ways of understanding what exists and how it matters. Ontologies shape the boundaries of what we perceive as possible, and their orientation is crucial for the consistency and fairness of AI algorithms. Since ontological assumptions become increasingly difficult to change once implemented, assumptions must be examined at every level of the development pipeline.
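The representativeness problem above can be made concrete with a small sketch. This is an illustrative check, not a method from the article: the group labels and population shares are hypothetical, and real audits would use richer demographic attributes.

```python
from collections import Counter

def representation_gap(samples, population_shares):
    """Compare each group's share of a training sample against a
    reference population share. Returns {group: sample_share - population_share};
    a large negative value flags an under-represented group."""
    counts = Counter(samples)
    total = len(samples)
    return {
        group: counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Hypothetical sample: women are 20% of the training data
# but 50% of the reference population.
labels = ["m"] * 80 + ["f"] * 20
gaps = representation_gap(labels, {"m": 0.5, "f": 0.5})
# gaps["f"] ≈ -0.3 → women under-represented by roughly 30 points
```

A check like this is cheap to run before training and surfaces exactly the "training data don't represent the real-world population" failure mode described above.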
The main real-life AI bias categories relate to racism, sexism, ageism and ableism. For example, a health care risk prediction algorithm used on more than 200 million U.S. citizens demonstrated racial bias because it relied on a faulty metric that favored white patients over black patients. In 2023, a UNDP study analyzed how DALL-E 2 and Stable Diffusion visualize roles such as "engineer" or "scientist": 75%-100% of the AI-generated images depicted men, even though real-world data show that women make up 28%-40% of STEM graduates globally. The reinforcement of bias is equally obvious when AI generates images of older men for specialized jobs or, conversely, when AI tools undervalue applicants past middle age and reject the older ones. As a final example, some AI systems favor able-bodied perspectives or fail to accommodate disabilities, such as voice recognition software that often struggles with speech disorders, or AI-powered interview platforms that evaluate applicants by comparing facial expressions, tone of voice and word choice against an "ideal candidate" profile.
Although generative AI confronts bias with dedicated detection tools and algorithmic fairness techniques, it remains vulnerable to it through its training data and through the human feedback used in reinforcement learning. So, since artificial intelligence inherits biases from humans, how can we expect unbiased decisions from AI that humans trained? Completely unbiased AI is unlikely to exist, as it relies on data created by humans, but it remains a goal worth pursuing.
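One of the simplest algorithmic fairness techniques mentioned above is measuring demographic parity: whether a model's positive-prediction rate is the same across groups. The sketch below is a minimal, self-contained illustration with made-up predictions; production audits would use a fairness library and confidence intervals.

```python
def demographic_parity_difference(y_pred, groups):
    """Difference between the highest and lowest positive-prediction
    rate across groups; 0 means perfect demographic parity."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening model: group "a" gets a positive
# outcome 3 times out of 4, group "b" only 1 time out of 4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
dpd = demographic_parity_difference(preds, groups)
# dpd == 0.5 → a large disparity worth investigating
```

Metrics like this are exactly what bias detection tools compute at scale, and they make the disparity auditable rather than anecdotal.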



Some of the actions that are crucial when designing new AI systems include cleaning training datasets of assumptions about race, gender or other ideological concepts, while following the legal frameworks that regulate AI bias. Responsible AI relies on fairness, efficacy, transparency and accountability, and regulatory frameworks have been enacted to achieve these goals. In Europe, for example, bias mitigation must be embedded throughout the AI lifecycle under the EU AI Act, alongside GDPR data-protection rules. Furthermore, to ensure that AI design incorporates ethical principles and societal values for better user interaction, steps such as repeated examination of the training data for representativeness and ongoing monitoring of post-deployment performance can guide Responsible AI practice. And when it comes to our domain, product delivery by SMEs, we know that when SMEs build fair and unbiased systems they actually build trust, with robust safeguards to prevent harm, ensure security and mitigate risks.
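The ongoing post-deployment monitoring mentioned above can be sketched with the well-known "four-fifths" adverse-impact heuristic: every group's selection rate should be at least 80% of the most favored group's rate. The group names and rates below are hypothetical, and the 80% threshold is one common convention, not a requirement of the article.

```python
def passes_four_fifths_rule(selection_rates, threshold=0.8):
    """Apply the four-fifths adverse-impact heuristic: a group passes
    if its selection rate is at least `threshold` times the highest
    group's rate. Returns {group: bool}."""
    top = max(selection_rates.values())
    return {g: r / top >= threshold for g, r in selection_rates.items()}

# Hypothetical monthly snapshot from a deployed screening tool:
rates = {"group_a": 0.60, "group_b": 0.42}
flags = passes_four_fifths_rule(rates)
# 0.42 / 0.60 = 0.70 < 0.8 → group_b is flagged for review
```

Run on every monitoring window, a check like this turns "ongoing monitoring of post-deployment performance" into a concrete, automatable alert.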