Computing Reviews
Biases in AI systems
Srinivasan R., Chander A. Communications of the ACM 64(8): 44-49, 2021. Type: Article
Date Reviewed: Oct 24 2022

As Srinivasan and Chander discuss, software packages and algorithms are subject to many biases, for example, those related to images on the web. The article is thus concerned with controlling computer applications rather than their human users. Machine learning (ML) and artificial intelligence (AI) architectures are large-scale examples of how tools that draw inferences from large volumes of unstructured data may produce inaccurate, if not intentionally malicious, predictions. The authors go further, arguing that, beyond the need for fairly designed AI algorithms, domain and nondomain experts should highlight “practical aspects that can be followed to limit and test for bias during problem formulation, data creation, data analysis, and evaluation.”

Piecewise linear reifications would, perhaps, allow systems “to learn numerical function values at a number of equidistant points in the attribute space and use linear interpolation to predict function values at other points” [1]. Furthermore, proxies can only be a snapshot of the real phenomenon being sampled (so-called measurement bias), and labels may be affected by the subjective opinions of labelers (so-called label bias). Unknown mechanisms such as the halo effect, that is, “the predisposition of an overall impression to influence the observer” [2], may also play a role.
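The idea quoted from [1] can be sketched roughly as follows. This is a minimal illustration, not the method of Šuc et al.: the grid size, the nearest-point binning scheme, and all function names here are assumptions made for the example.

```python
import numpy as np

def fit_equidistant(xs, ys, n_points=11):
    """Learn function values at n_points equidistant grid points by
    averaging the training targets assigned to each point."""
    grid = np.linspace(xs.min(), xs.max(), n_points)
    # assign each training sample to its nearest grid point
    idx = np.abs(xs[:, None] - grid[None, :]).argmin(axis=1)
    values = np.array([ys[idx == i].mean() if np.any(idx == i) else np.nan
                       for i in range(n_points)])
    # fill any empty grid points by interpolating from the filled ones
    filled = ~np.isnan(values)
    values = np.interp(grid, grid[filled], values[filled])
    return grid, values

def predict(grid, values, x_new):
    """Predict at new points by linear interpolation between grid values."""
    return np.interp(x_new, grid, values)

# usage: recover sin(x) on [0, pi] from noisy samples
rng = np.random.default_rng(0)
xs = rng.uniform(0, np.pi, 200)
ys = np.sin(xs) + rng.normal(0, 0.05, xs.size)
grid, vals = fit_equidistant(xs, ys)
print(predict(grid, vals, np.pi / 2))  # close to 1.0, the true sin(pi/2)
```

The interpolation step is what keeps the learned model qualitatively faithful between grid points: predictions change monotonically between adjacent learned values rather than oscillating as a high-degree fit might.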

It is not that computer applications are evaluated negatively here; rather, a lack of trust is explored, for example, how computers may influence our personal expectations. Though there is much literature on the topic, the authors ask readers to understand “the structural dependencies among various features in datasets.” Creating such “dependencies” implies that our emotions are a predictive expression of whether or not a general cognitive association will lead to an evaluative disadvantage.

Reviewer: Romina Fucà
Review #: CR147506
1) Šuc, D.; Vladušič, D.; Bratko, I. Qualitatively faithful quantitative prediction. Artificial Intelligence 158 (2004), 189–214.
2) Varona, D.; Suárez, J. L. Discrimination, bias, fairness, and trustworthy AI. Applied Sciences 12 (2022), https://doi.org/10.3390/app12125826.
Editor Recommended
Other reviews under "Software Architectures":
Software fortresses: modeling enterprise architectures. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, 2003. 277 pp. Type: Book (9780321166081). Reviewed: Sep 5 2003
Pattern-oriented software architecture: a pattern language for distributed computing (Wiley Software Patterns Series). Buschmann F., Henney K., Schmidt D., John Wiley & Sons, 2007. 636 pp. Type: Book (9780470059029), Reviews: (2 of 3). Reviewed: Mar 11 2008
Supporting runtime software architecture: a bidirectional-transformation-based approach. Song H., Huang G., Chauvel F., Xiong Y., Hu Z., Sun Y., Mei H. Journal of Systems and Software 84(5): 711-723, 2011. Type: Article. Reviewed: Nov 1 2011

Reproduction in whole or in part without permission is prohibited.   Copyright 2004 Reviews.com™