WILL TECH CHANGE WHAT IT MEANS TO BE HUMAN?

Posted in: design, design education, general

Last month, I enrolled in a course on the Ethics of AI with the London School of Economics [LSE]. The ethical dimension of tech has always intrigued and interested me, all the more so with the current surge of AI in our daily lives. The course was everything I had hoped for, with lots of readings, great video content, podcasts and live discussions, which truly helped to consolidate the content for each module. In fact, I think I have enough content at this point to last me a few months 🙂

Some key, high-level points that stood out for me from the course, in no particular order:

  1. In technology, and AI in particular, is transparency necessary for legitimacy?
  2. There is weak AI and strong AI, and even weak AI is more powerful today than human intelligence at specific tasks – wrap your head around that 😐
  3. Discriminative AI [not discriminatory] is everywhere at this point – it can sort, classify and process data, and find patterns which humans can’t notice, or fundamentally can’t understand.
  4. Generative AI [the AI most of us know of and refer to when we talk about AI] has the ability to produce new content, and that is where the majority of ethical concerns stem from.
  5. Misinformation undermines democracy – what role does AI play in the spread of information and misinformation?
  6. As citizens of a democratic country, what sort of decisions are we willing to delegate to AI without fully understanding how AI makes those decisions?
  7. Tech is a global phenomenon, so in a global economy should its benefits be globalised?
  8. AI will cause ‘creative destruction’ [not necessarily a bad thing] – a shift in the job market. With it, it will bring benefits for some and burdens for others – the issue, as I see it at this point, is that these benefits and burdens are not equally distributed and will give rise to equity concerns in society.
  9. A key burden the underdeveloped world faces today is unequal access to technology.
  10. In the age of big data, does privacy become a collective issue? ‘Consent fatigue’ – our willingness to give away our privacy, say by clicking on the ‘I agree’ button at the end of that long document set in 7 pts, is real. We are all guilty of that.
  11. The issue of value alignment in AI [my favourite topic] – whose values should tech be aligned to? Who decides? How do we define values? Are values a measurable facet?
  12. AI was metaphorically compared to religion in one discussion forum – over the centuries, the human race has given religion and its interpretations the power to dictate our value system for us. Are we now giving that same power to the machine?
  13. AND lastly, will tech change what it means to be human?

This is just the tip of the iceberg – a high-level gist of what we spoke about, discussed and read in this month-long course.