Lauri Goldkind, Social Work and Artificial Intelligence: Into the Matrix, Social Work, Volume 66, Issue 4, October 2021, Pages 372–374, https://doi.org/10.1093/sw/swab028
Extract
Artificial intelligence (AI) is not a single tool; rather, it is a suite of algorithmic computing capacities that can perform humanlike functions across settings. AI refers to dynamic machine intelligence, including facial recognition (computer vision), perception (computer vision and speech recognition), natural language processing (chatbots and data mining), and social intelligence (emotive computing and sentiment analysis), to name a few. The actual lines of code powering AI tools are commands that tell machines what to do, which can be neutral strings of directives. However, those who program the code, the data that drive outcomes, and the social systems in which these tools are deployed all inevitably reflect existing structural inequalities. AI now powers decision making that ranges from tasks as benign as matching drivers with riders who need transportation to ethically fraught risk management processes, including the scoring of criminal offenders for sentencing and initial triage in child welfare (Eubanks, 2018).
The advent of these highly automated tools has prompted demands from the academic, industry, and government sectors to examine how digital decision making can concentrate human bias. This call to infuse ethics and social justice–centered design into computer and data science curricula and the technology sector represents a significant opportunity for social work. As a values-centered profession with a robust code of ethics, social work is uniquely positioned to engage across disciplines to inform the creation of thoughtful, algorithmically enhanced policy and practice at all levels. Social work’s core values of social justice, integrity, and the primacy of relationships make the profession well suited to assist developers as they empirically test the effectiveness of their algorithmic products. Our ethical duty to vulnerable populations requires that we monitor and assess the data and the assumptions used to train these algorithms, attending to the social implications of an emerging generation of tools.