Last month, I enrolled in a course on the Ethics of AI with the London School of Economics [LSE]. The ethical dimension of tech has always intrigued and interested me, and more so with the current surge of AI in our daily lives. The course was everything I had hoped for, with lots of readings, great video content, podcasts and live discussions, which truly helped to consolidate the content for each module. In fact, I think I have enough material at this point to last me a few months 🙂
Some key, high-level points that stood out for me from the course, in no particular order:
- In technology, and AI in particular, is transparency necessary for legitimacy?
- There is weak AI and strong AI, and even weak AI is more powerful today than human intelligence – wrap your head around that 😐
- Discriminative AI [not discriminatory] is everywhere at this point – it can sort, classify and process data, and find patterns which humans can’t notice, or fundamentally can’t understand.
- Generative AI [the AI most of us know of and refer to when we talk about AI] has the ability to produce new content, and that is where the majority of ethical concerns stem from.
- Misinformation undermines democracy – what role does AI play in the spread of information and misinformation?
- As citizens of a democratic country, what sort of decisions are we willing to delegate to AI without fully understanding how AI makes those decisions?
- Tech is a global phenomenon, so in a global economy should its benefits be globalised?
- AI will cause ‘creative destruction’ [not necessarily a bad thing] – a shift in the job market. With it, it will bring benefits for some and burdens for others – the issue as I see it, at this point, is that these benefits and burdens are not equally distributed and will give rise to equity concerns in society.
- A key burden the underdeveloped world faces today is unequal access to technology.
- In the age of big data, does privacy become a collective issue? ‘Consent fatigue’ – our willingness to give away our privacy, say by clicking the ‘I do’ button at the end of that long document set in 7-point type – is real. We are all guilty of that.
- The issue of value alignment in AI [my favourite topic] – whose values should tech be aligned to? Who decides? How do we define values? Are values even measurable?
- AI was metaphorically compared to religion in one discussion forum – over the centuries, the human race has given religion and its interpretations the power to dictate our value system for us. Are we now giving that same power to the machine?
- AND lastly, will tech change what it means to be human?
This is just the tip of the iceberg – a high-level gist of what we spoke about, discussed and read in this month-long course.