On 08.08.2020 (Saturday) and 09.08.2020 (Sunday) the first AI for People Workshop will be held. The event will be held online and is entirely free. Register here.
AI for People was born out of the idea of shaping Artificial Intelligence technology around human and societal needs. We believe that technology should respect the anthropocentric principle: it should be at the service of people, not vice versa. To foster this idea, we need to narrow the gap between civil society and technical experts: a gap in knowledge, in action and in tools for change.
In this spirit, we want to share our knowledge with everyone through a hands-on workshop. We are glad to announce seven speakers offering talks on a diversity of topics in and around Artificial Intelligence. There are two basic courses that require little to no programming experience, four advanced courses with varying degrees of technical depth, and one invited speaker.
The Schedule:
The Topics and Speakers:
The course is free for everyone, yet we ask you to register in advance. As a non-profit organisation all of our work and effort is voluntary. If you like the workshop, we suggest a donation of 15€ for the entire workshop (aiforpeople.org/supporters).
Attendance is limited by the virtual classroom, and entry is on a first-registered, first-served basis, so we ask you to register as soon as possible here: REGISTRATION
If you have any questions, feel free to contact us at: aiapplications@46.101.110.35
1st AI for People Workshop was originally published in AI for People on Medium, where people are continuing the conversation by highlighting and responding to this story.
In the last part of our technical analysis, we explained how you can create your own corpus of tweets about a certain topic. In this part, we want to formulate some research hypotheses and test them on our corpus. Most of this code will be easy to understand and simple to implement.
This article sits at the intersection of rapid news development, fake news, trustworthy journalism and conspiracies. It aims to deliver a reproducible and beginner-friendly introduction for those who want to have a look at the data themselves: not so much to compete with proper journalism, but to shed some light on how trends can be observed and how science, media and journalism can approach these problems using computational tools.
Creating a Corpus
From 20.03.2020 until 11.05.2020, we collected about 25,000 tweets every day (except for 10.04, 15.04 and 16.04, due to some access issues on Twitter) with keywords related to the Covid-19 epidemic (as described in the last post). Now, we need to come up with some processing steps to create the corpus we want to investigate.
Because Twitter's terms of service do not allow redistributing full tweet content, all of the available "datasets" that are linked here, and most of those that you can find elsewhere, only store the Tweet-IDs for a certain corpus. Consequently, you will have to download all tweets yourself. You can achieve this with the free software tool Hydrator, which takes your list of Tweet-IDs and automatically downloads the corresponding tweets.
Now, you have either created a collection of tweets yourself or used an available database of Tweet-IDs to create your collection. There is one more consideration before we investigate the corpus: on Twitter, a lot of content is created by a small group of users. Therefore, we have filtered out multiple tweets from the same users (as explained in the previous episode of this article). Ideally, the resulting corpus should be stored as a text file with one tweet per line, including the date. In this tutorial, we will use the following format: YYYY-MM-DD ::: TWEET (year, month and day ::: one tweet per line).
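Later snippets assume the corpus has been loaded into a dictionary called all_tweets whose values are (timestamp, text) pairs. The loading step is not shown in the post; a minimal sketch for the format above, using running indices as keys (an assumption; the original may key by tweet ID), could look like this:

```python
# two sample lines in the "YYYY-MM-DD ::: TWEET" format described above
sample = [
    "2020-03-20 ::: Stay safe everyone #covid19",
    "2020-03-21 ::: Reading the news about #coronavirus",
]

# load into the all_tweets dictionary used by the analysis snippets;
# each value is a (timestamp, tweet text) pair
all_tweets = dict()
for i, line in enumerate(sample):
    date, _, tweet = line.partition(" ::: ")
    all_tweets[i] = (date.strip(), tweet.strip())
```

To load from the actual corpus file, replace the sample list with the lines of the text file.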
Research Questions
There are a lot of factors and features that we can investigate within a corpus. Let us address the following research questions with a more refined analysis:
Research Question 01: How many Covid-19 tweets are about "China" (i.e. contain #China or #Chinese)?
Research Question 02: Among those "China" tweets, how many feature sinophobic keywords (e.g. #Chinavirus, chink, etc.)?
Research Question 03: Given Research Question 02, what percentage do these make up of the total number of tweets collected about the pandemic? And of the ones about the pandemic and China?
Research Question 04: Does this percentage change over time?
None of these research questions tries to establish statistical significance. We are performing this analysis on a very small corpus in order to show how such an analysis can be conducted with little effort and little data.
In order to create our subcorpus, which features only those tweets that are about China, we have to apply a filter:
# opening a new subcorpus file to write (w) in
with open("china_subcorpus.txt", "w") as f_out:
    with open("covid19_corpus.txt", "r") as f_in:
        lines = f_in.readlines()
    for line in lines:
        if "china" in line.lower() or "chinese" in line.lower():
            f_out.write(line)
Here, we have simply read our corpus line by line; if a line contains the word "china" or "chinese", we write it into a new file. Note that we have used line.lower() to match China = china and Chinese = chinese in case these words were capitalised differently in the tweets.
Research Question 01
Now, we can calculate the proportion of tweets that mention China/Chinese out of the total corpus. We know that in our example the corpus held 638,358 tweets about Covid-19; in code, you could query len(lines) from the code above to inspect the length of the corpus. Therefore, we can proceed with:
# number of tweets in the China sub-corpus
chinese_corpus = len(all_tweets)

# total number of Covid-19 tweets in the full corpus (see above)
ENTIRE_CORPUS = 638358

print("Percentage of Chinese tweets from entire corpus: %.2f%%" % ((chinese_corpus / ENTIRE_CORPUS) * 100))
This will evaluate to: Percentage of Chinese tweets from entire corpus: 2.63%. Consequently, we can answer our first research question. We could also extend this question at this point and compare this number to "German" or "English" occurrences. But let us move on to the next research question.
Research Question 02
We now want to identify how many of those "Chinese" tweets are sinophobic. We could also extend this to the entire corpus, but for now we want to proceed with a sub-corpus analysis. First of all, we need to come up with a list of sinophobic expressions. For this, we had a look at the list of English derogatory terms in the Wikipedia entry "Anti-Chinese sentiment". We have also included expressions from the paper "Go eat a bat, Chang!: An Early Look on the Emergence of Sinophobic Behavior on Web Communities in the Face of COVID-19" by Schild, L., Ling, C., Blackburn, J., Stringhini, G., Zhang, Y., & Zannettou, S. (2020). We can now define our list of sinophobic words as follows:
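The list itself appears as an image in the published post; judging from the occurrence counts reported below, it contained (at least) the following terms:

```python
# sinophobic keywords drawn from the Wikipedia entry
# "Anti-Chinese sentiment" and from Schild et al. (2020)
sinophobic_keywords = [
    "chinesevirus", "chinavirus", "chinesewuhanvirus", "wuhanvirus",
    "kungfuvirus", "chinazi", "chingchong", "chink", "chankoro",
    "gook", "bugmen", "bugland", "insectoid",
]
```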
Additionally, we could also include sinophobic compound expressions. For example, we could define a list of negative words e.g. “stupid”, “weak”, “f#cking”, “damn”, “ugly” and check if these words appear before “China” or “Chinese”.
Now, we can run this set of keywords against our sub-corpus and count the occurrences of each keyword. As we want to know which sinophobic word occurs a lot and which does not, we store all sinophobic keywords in a dictionary and initialize their count at 0:
# make a dictionary out of the keywords with the value
# being the count for its occurrence
sinophobic_occurrences = dict()
for keyword in sinophobic_keywords:
    # initialize them all with counter being 0
    sinophobic_occurrences[keyword.lower()] = 0

# all_tweets maps keys to (timestamp, text) pairs,
# therefore iterate over the values()
for tweet in all_tweets.values():
    # accessing the tweet (the text is stored in tweet[1])
    tweet = tweet[1]
    # for every sinophobic keyword in our set
    for sino_word in sinophobic_keywords:
        # check if the sinophobic keyword can be found in the tweet
        if sino_word in tweet.lower():
            # here we know which tweet has
            # what kind of sinophobic word
            sinophobic_occurrences[sino_word.lower()] += 1
print(sinophobic_occurrences)
This code will provide us with the result: {'kungfuvirus': 2, 'bugmen': 0, 'chingchong': 1, 'chinazi': 30, 'bugland': 0, 'chinesevirus': 1305, 'chinavirus': 980, 'chankoro': 0, 'gook': 0, 'insectoid': 0, 'chinesewuhanvirus': 153, 'chink': 4, 'wuhanvirus': 698}. In order to answer the research question, we need to count the individual tweets that contain at least one sinophobic word, as opposed to our previous count, which can include multiple sinophobic terms from a single tweet. We could adapt the code above to do that, or we can simply iterate over the corpus again:
single_sinophobic_occurrences = 0
for tweet in all_tweets.values():
    # accessing the tweet (the timestamp is stored in tweet[0])
    tweet = tweet[1]
    # for every sinophobic keyword in our set
    for sino_word in sinophobic_keywords:
        # check if the sinophobic keyword can be found in the tweet
        if sino_word in tweet.lower():
            single_sinophobic_occurrences += 1
            break
print("Number of sino. tweets: "+str(single_sinophobic_occurrences))
This results in: Number of sino. tweets: 2531, and we can head to our third research question.
Research Question 03
We can now answer the third research question by putting all of our numbers in place and evaluating the percentages:
print("Percentage of Sinophobic tweets from Chinese sub-corpus: %.2f%%" % ((total_num_sinophobic_tweets / total_num_covid_chinese_tweets) * 100))
print("Percentage of Sinophobic tweets from entire corpus: %.2f%%" % ((total_num_sinophobic_tweets / total_num_covid_tweets) * 100))
The result here is: Percentage of Sinophobic tweets from Chinese sub-corpus: 15.02%; Percentage of Sinophobic tweets from entire corpus: 0.39%. Note that sinophobic tweets were only counted within the Chinese sub-corpus; there may well be sinophobic tweets in the Covid-19 corpus that do not contain "China" or "Chinese". This could be investigated further. Overall, it is an interesting result that more than 15% of all tweets about Covid-19 and China feature at least one sinophobic term.
Research Question 04
We can now perform a temporal analysis. Luckily, we have stored the time-stamps in our dictionary of tweets. We now need to parse this information and visualize it somehow. Here is a guideline of how we can proceed:
Make a dictionary that stores all days from first to last day of the corpus, including any missing days.
Go over the corpus again and look for sinophobic tweets, basically copy the code from above and paste it.
This time, whenever you find a tweet containing a sinophobic keyword, increment the value for the respective day.
Now, you’ll have a dictionary for every day with the number of counted sinophobic tweets on that day.
Then divide each of those counts by the total number of tweets of that day to obtain the proportional value.
# import libraries to handle dates
from datetime import datetime, timedelta

# create dictionaries to store daily tweet counts
days_dict_sino = dict()
days_dict_all = dict()

# access the date entry (index 0) of the first tweet (index 0)
start_date_string = list(all_tweets.values())[0][0].split(" ")[0]
# access the date entry (index 0) of the last tweet (index -1)
end_date_string = list(all_tweets.values())[-1][0].split(" ")[0]

# parse the dates
start_date = datetime.strptime(start_date_string, '%Y-%m-%d')
end_date = datetime.strptime(end_date_string, '%Y-%m-%d')

delta = end_date - start_date
days = [start_date + timedelta(days=i) for i in range(delta.days + 1)]

# create daily dictionaries (covering missing days as well)
for day in days:
    day = day.strftime("%Y-%m-%d")
    days_dict_sino[day] = 0
    days_dict_all[day] = 0

# fill daily dictionaries: every tweet counts towards the daily total,
# but a tweet is counted as sinophobic at most once (hence the break)
for dat, tweet in all_tweets.values():
    day = dat.split(" ")[0]
    days_dict_all[day] += 1
    for sino_word in sinophobic_keywords:
        if sino_word in tweet.lower():
            days_dict_sino[day] += 1
            break
After we have done that, we have the absolute number of sinophobic tweets in our China-Covid19 corpus for each day. Next, we need to turn those numbers into proportional values:
# the daily counts as lists (dictionaries preserve insertion order)
all_daily_tweets = list(days_dict_all.values())
all_daily_sino_tweets = list(days_dict_sino.values())

perc_results = []
for tot_tweets, sino_tweets in zip(all_daily_tweets, all_daily_sino_tweets):
    if tot_tweets == 0:
        perc_results.append(0)
    else:
        perc_results.append((sino_tweets / tot_tweets) * 100)
We can now simply use a plot function to visualize the trend over time. There are many ways of visualizing this kind of data, but we will look at a very basic bar graph using matplotlib:
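The plotting code is not reproduced in the published post; a minimal sketch with hypothetical stand-in values (in the article, the day labels come from the daily dictionaries and the heights from perc_results) could look like this:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# hypothetical stand-in data; in the article these come from the
# days_dict_all keys and the perc_results list computed above
days = ["2020-03-20", "2020-03-21", "2020-03-22", "2020-03-23"]
perc_results = [14.8, 15.1, 14.6, 16.3]

plt.figure(figsize=(12, 4))
plt.bar(days, perc_results)
plt.xticks(rotation=90)
plt.ylabel("Sinophobic tweets (%)")
plt.tight_layout()
plt.savefig("sino_trend.png")
```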
As a result, we can observe that the ratio of sinophobic tweets is relatively stable, except for the last few days. Interpreting this graph and the data is the task of the next conceptual article. Notably, we have included the three days with missing data in the graph. Whenever you collect data on Twitter over the course of a few weeks, you can expect issues with the Twitter API; all other linked datasets also show missing data on some days.
With this, we have provided the empirical evaluation needed to discuss our research questions and their conceptual underpinnings. Our goal in this post was not to provide a strict statistical analysis or hard empirical evidence; those can be found in numerous scientific articles (here, here, here and here).
In the next article, we can finally start to apply some Machine Learning. We will use topic modeling in order to see what other topics emerge in our corpus, i.e. what do people talk about when they talk about China and Covid-19. The next technical article will be accompanied by a conceptual article which will better explain the findings overall.
In this article, we will have a look at the technical underpinnings of the first episode of our series on “Analyzing online discourse for everyone”. In this first part, we will concern ourselves with the data acquisition process. We will outline how to approach a data mining task and how to implement it.
In order to obtain data about an online discourse, we first need to answer a few questions:
What is considered online discourse?
How can we access online discourse?
How much online discourse do we need to observe?
We can consider online discourse to be everything discussed in an online environment. While newspapers, government websites and academic articles are part of it, we want to focus on a broader section of the online discourse, one that has almost no filters: Twitter.
Twitter is ideal because it has about 330 million monthly active users (as of Q1 2019). Of these, more than 40 percent use the service daily, creating about 500 million tweets per day. Furthermore, those tweets are mostly freely accessible! In fact, this provides so much data that we need to create our own filters. Our first step is therefore to see what we can access and what we actually need:
We want:
Data over a period of time
Data relating to a certain topic
Data of a considerable proportion
We get:
The free Twitter API allows access to the last 7–10 days of tweets
Every day sees about 500,000,000 tweets
With collection restrictions (rate limits), we can collect about 10,000 tweets per hour.
Now, we need to put together what we want and what we get. In this tutorial, we will write Python code that has three simple requirements: Python 3.7+, the Tweepy package and a Twitter account.
There are a lot of great tutorials that explain the use of Tweepy, how to create Twitter bots, and how to obtain data from Twitter and use it. Therefore, we will keep it short here: rather than explaining what the tool is, we will explain in detail how we use it.
We go to the Twitter Developer page and log in with our Twitter credentials (you might need to apply for a developer account, which is fairly easy and quickly done). Next, we have to create an App and generate our access credentials. Those credentials will be the key to connecting the Tweepy API with the Python program. Make sure to store a copy of the access tokens:
Now, we have everything to start writing our Python code. First of all, we need to import the Tweepy package (install with “pip install tweepy”) and we will have to write our access credentials and tokens into the code:
api = tw.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
What is happening in the code above? We use Tweepy to create an authentication handler and store it in "auth" using the consumer key and consumer secret. Basically, we define through which door we want to access Twitter, and with auth.set_access_token(…) we provide the key to that door. The open door is then stored, with certain parameters, in "api". One of those parameters is the door ("auth"); another one here is wait_on_rate_limit=True. The Tweepy documentation tells us that this parameter decides "whether or not to automatically wait for rate limits to replenish". Why?
Free Twitter access comes with a rate limit, i.e. you can only download a certain number of tweets before you need to wait or Twitter kicks you out. But: "Rate limits are divided into 15 minute intervals." When we set wait_on_rate_limit to True, our program automatically waits 15 minutes whenever we exceed the rate limit, so that Twitter does not lock us out and we automatically continue to collect new data!
Now, we need to specify the parameters for our search:
start_day: the date to begin crawling data, in the format YYYY-MM-DD. It can be at most 7 days in the past.
end_day: the date to stop crawling data, in the format YYYY-MM-DD. If you want to crawl a single day, set this to the day after start_day.
amount: how many tweets you want to collect. Maybe take 15 at the beginning to test everything.
label: in order to store the data, you need to label it; otherwise you'll overwrite it every time!
search_words: a string that combines your search words with AND or OR connectors. We will look at an example of this.
# stores the data here as: test_2020-04-06_n15
label = "test_" + start_day + "_n" + str(amount)
search_words = "#covid19 OR #coronavirus OR #ncov2019 OR #2019ncov OR #nCoV OR #nCoV2019 OR #2019nCoV OR #COVID19 -filter:retweets"
The parameters above will collect a test sample of 50 (amount) tweets from the 23rd of May to the 24th of May, so just 50 tweets from one day. Those tweets will be stored in the file "test_2020-05-23_n50". We now apply the first filter; otherwise we would collect any sort of tweet from that day. Our search words are common hashtags of the Covid19 discourse: #covid19, #coronavirus, etc. Furthermore, we want to look at tweets and not retweets, so we exclude retweets with "-filter:retweets". Now, we can start to obtain the data and run:
Here, we further set the language to "en" (English) and the tweet mode to "extended", which makes sure the entire tweet is stored. The rest of the parameters are as we defined them before. In the next two lines of code, we simply reformat the obtained tweets into a list and print the first tweet just to have a look:
tweets = [tweet for tweet in tweets]
print(tweets[0])
As you can see, this is a ton of information: number of retweets, number of likes, coordinates, profile background image URL… everything about that single tweet! That is why we now filter for the user.id and the full_text. If you want, you can also access other information, such as location, but for now we are not interested in that. Have a look at the following code; you can find its explanation below:
first_entry = None
last_entry = None

all_user_ids = []
raw_tweets = []
for tweet in tweets:
This code creates an empty list for all the user ids and then iterates over all tweets. It looks at the created_at field of a tweet to check whether it is the first entry (because we initially set first_entry to None). It then checks whether tweet.user.id is not in the list all_user_ids, meaning it only looks at tweets from users we have not seen yet. Why did we do that?
A scientific analysis of fake-news spread during the 2016 US presidential election showed that about 1% of users accounted for 80% of fake news, and it reports that other research suggests 80% of all tweets can be linked to the top 10% of most active users. Therefore, in order to represent a diversity of opinions that cannot be traced back to only a few users, we filter out multiple tweets from the same user.
The code then appends the user id (as we have now seen the user) and stores the full tweet. The replace statement (replace("\n", " ")) just gets rid of line breaks within tweets. The if full_tweet is checked because we could have an empty tweet (which sometimes is a bug of the API). We print the full tweet (the "\n------------" is a line break and some dashes so it looks nicer when printed) and store each full tweet in a list called raw_tweets. Finally, we access the created_at field to get the date of creation when we have reached the very last tweet. Our script will then print some of the collected tweets, which could look like this:
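The loop body itself appears as an image in the published post; based on the description above, a reconstruction (with hypothetical stand-in tweet objects so the snippet runs on its own) could look like this:

```python
# hypothetical stand-ins for Tweepy tweet objects, exposing only the
# attributes the loop uses (user.id, full_text, created_at)
class FakeUser:
    def __init__(self, id):
        self.id = id

class FakeTweet:
    def __init__(self, user_id, text, created_at):
        self.user = FakeUser(user_id)
        self.full_text = text
        self.created_at = created_at

tweets = [
    FakeTweet(1, "we will get through this\n#Covid19", "2020-05-23 10:00:00"),
    FakeTweet(1, "second tweet from the same user", "2020-05-23 11:00:00"),
    FakeTweet(2, "another user's tweet #coronavirus", "2020-05-23 12:00:00"),
]

first_entry = None
last_entry = None
all_user_ids = []
raw_tweets = []
for tweet in tweets:
    # remember and print the timestamp of the very first tweet
    if first_entry is None:
        first_entry = tweet.created_at
        print("First tweet collected at: " + str(first_entry))
    # only keep tweets from users we have not seen yet
    if tweet.user.id not in all_user_ids:
        all_user_ids.append(tweet.user.id)
        # strip line breaks inside the tweet text
        full_tweet = tweet.full_text.replace("\n", " ")
        if full_tweet:  # guard against empty tweets
            print(full_tweet + "\n------------")
            raw_tweets.append(full_tweet)
    # after the loop, this holds the timestamp of the last tweet
    last_entry = tweet.created_at
```

With real Tweepy result objects, tweets would simply be the list collected earlier.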
First tweet collected at: 2020-04-06 23:59:59
-------------------------------------------
User #20I3120348: we will get through this together #Covid19
------------
User #203480I312: They fear #Trump2020. They created this version of #coronavirus Just to get him out of office. Looks like the plan worked in the UK...
------------
User #96902235II37193185: Like millions of others I don't see eye to eye with Boris Johnson but I hope he pulls through. Why? Because I'm human. I wouldn't wish this on my worst enemy. I've witnessed someone die of pneumonia and believe me it's NOT pretty. #GetWellBoris #PrayForBoris #COVID19
And that is it for the first part! We have now collected 50 tweets from the 23rd of May 2020 that relate to the Covid19 discourse. Hopefully, it is clear how this script can be extended to create an entire corpus of thousands of tweets over multiple days. Such a corpus has thankfully been created by various researchers, including ourselves. With this corpus, we can then start to investigate the relation between the Covid19 discourse and Sinophobia.
In the next article of this series, we’ll look at some Natural Language Processing, Data Analysis and Topic Modeling to assess the data we have collected!
References
[1] Bucket-wheel excavator 286, Inden surface mine, Germany; the bucket-wheel is under repair. 10. April 2016. https://pixabay.com/en/open-pit-mining-raw-materials-1327116/ pixel2013 (Silvia & Frank) Edit: Cropped and overlay of numbers. CC0 1.0.
In this article, we will have a look at the technical underpinnings of the first episode of our series on “Analyzing online discourse for everyone”. In this first part, we will concern ourselves with the data acquisition process. We will outline how to approach a data mining task and how to implement it.
In order to obtain data about an online discourse, we first need to answer a few questions:
What is considered online discourse?
How can we access online discourse?
How much online discourse do we need to observe?
We can consider online discourse to be everything discussed in an online environment. As much as newspapers, government websites and academic articles are concerned, we want to focus on a broader section of the online discourse, one that has almost no filters: Twitter.
Twitter is ideal, because there are about 330 million monthly active users (as of Q1 in 2019). Of these, more than 40 percent use the service on a daily basis creating about 500 million tweets per day. Furthermore, those tweets are mostly freely accessible! In fact, this provides so much data that we need to create our own filters. Our first step is therefore to see what we can access and what we actually need to access:
We want:
Data over a period of time
Data relating to a certain topic
Data of a considerable proportion
We get:
The free Twitter API allows access to the last 7–10 days of tweets
Everyday has about 500.000.000 tweets
With collection restriction (rate limits), we can collect about 10.000/hour.
Now, we need to put together what-we-want and what-we-get. In this tutorial we will write python code that has three simple requirements: Python 3.7+, the Tweepy Packageand a Twitter account.
There are a lot of great tutorials that explain the use of Tweepy, how to create Twitterbots and probably also how to obtain data from Twitter and using it. Therefore, we will keep it short here and do not explain what the tool is that we use, but we will explain how we use this tool in detail.
We go to the Twitter Developer page and login with out Twitter credentials (you might need to apply for a developer account, which is fairly easy and briefly done). Next, we will have to create an App and generate our access credentials. Those credentials will be the key to connect the Tweepy API with the Python program. Make sure to store a copy of the access tokens:
Now, we have everything to start writing our Python code. First of all, we need to import the Tweepy package (install with “pip install tweepy”) and we will have to write our access credentials and tokens into the code:
api = tw.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
What is happening in this code above? Well, we use tweepy to create an Authentication Handler and store it in “auth” using the consumer key and consumer secret — basically we define through which door we want to access Twitter and with the “auth.set_access_token(…)” we provide the key to access the door. Now, the open door will be stored with certain parameters in “api”. One of those parameters is the door (“auth”) and the other one here is called “wait_on_rate_limit=True”. We can see in the Tweepy API that this parameter decides “Whether or not to automatically wait for rate limits to replenish”. Why?
The free Twitter access comes with a rate limit, i.e. you can only download a certain number of tweets before you need to wait or before Twitter kicks you out. But: “Rate limits are divided into 15 minute intervals.” When we set wait_on_rate_limit to True, we will have our program wait automatically 15 minutes so that Twitter does not lock us out, whenever we exceed the rate limit and we automatically continue to get new data!
Now, we need to specify the parameters for our search:
start_day: Date of beginning to crawl data in format YYYY-MM-DD. It can only be 7 days in the past.
end_day: Date of ending to crawl data in format YYYY-MM-DD. If you want to crawl for a single day, set this to the day after the start_day.
amount: Specify how many tweets you want to collect. Maybe take 15 for the beginning to test everything.
label: In order to store the data, you need to label it, otherwise you'll override it every time!
search_words: This is a string that combines your search words with AND or OR connection. We will look at an example of this.
# stores the data here as: test_2020–04–06_15 label = "test_"+start_day+"_n"+str(amount)
search_words = "#covid19 OR #coronavirus OR #ncov2019 OR #2019ncov OR #nCoV OR #nCoV2019 OR #2019nCoV OR #COVID19 -filter:retweets"
The parameters above will collect a test sample of 50 (amount) tweets from the 23rd of May to 24th of May — so just 50 tweets from one day. And those tweets will be stored in the file “test_2020–05–23_n50”. We now apply the first filter, otherwise we would just collect any sort of tweet from that day. Our search words are common hashtags of the Covid19 discourse: #covid19 #coronavirus etc. Furthermore, we want to look at tweets and not re-tweets, therefore we exclude re-tweets with “-filter:retweets”. Now, we can start to obtain the data and run:
Here, we further set the language to “en” = English and the tweet mode to “extended”, which makes sure the entire tweet is stored. The rest of the parameters are as we have defined them before. Now, in the next two lines of code, we simply reformat the obtained tweets into a list and print the first tweet just to have a look:
tweets = [tweet for tweet in tweets] print(tweets[0])
As you can see, this is a ton of information. Number of retweets, number of likes, coordinates, profile background image url… everything about that single tweet! That is why we now filter for the user.id and the full_text. If you want you can also access other information such as location etc, but for now we are not interested in that. Have a look at the following code, before you can find its explanation below:
first_entry = None last_entry = None
all_user_ids = [] raw_tweets = [] for tweet in tweets:
This code creates an empty list for all the user ids and then iterates over all tweets. It looks at the created_at field of a tweet to check whether it is the first entry (because we initially set first_entry to None). Now it checks if tweet.user.id is not in the list of all_user_ids. This means it only looks at tweets from users we have not seen yet. Why did we do that?
A scientific analysis of fake news spread during the 2016 US presidential election showed that about 1% of users accounted for 80% of fake news and report that other research suggests that 80% of all tweets can be linked to the top 10% of most tweeting users. Therefore, in order to have a representation of a diverse opinion that cannot be linked to a few but many users, we filter out multiple tweets from the same user.
Then our code appends the user id (as we now have seen the user) and stores the full tweet. The replace statement (replace(“\n”, “ “)) just gets rid of line-breaks in tweets. The if full_tweet is checked, because we could have an empty tweet (which sometimes is a bug of the api). We print the full tweet (the “\n — — — — — “ is a line break and some dashes so it looks nicer when printed). And store each full tweet in a list called raw_tweets. Finally, we access the created_at field to get the date of creation when we have reached the very last tweet. Our script will then print some of the collected tweets, which could look like this:
First tweet collected at: 2020-04-06 23:59:59 ------------------------------------------- User #20I3120348: we will get through this together #Covid19 ------------ User #203480I312: They fear #Trump2020. They created this version of #coronavirus Just to get him out of office. Looks like the plan worked in the UK... ------------ User #96902235II37193185: Like millions of others I don't see eye to eye with Boris Johnson but I hope he pulls through. Why? Because I'm human. I wouldn't wish this on my worst enemy. I've witnessed someone die of pneumonia and believe me it's NOT pretty. #GetWellBoris #PrayForBoris #COVID19
And that is it for the first part! We have now collected 50 tweets from the 23rd of May 2020 that relate to the Covid19 discourse. Hopefully, it is clear how this script can be extended to create an entire corpus of thousands of tweets over multiple days. Such a corpus has thankfully been created by various researchers, including ourselves. With this corpus we can then start to investigate the relation between the Covid19 discourse and Sinophobia.
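As a rough illustration of that extension, one could wrap the collection step in a helper and repeat it once per day. Note that collect_tweets_for_day below is a hypothetical placeholder, not the actual function from our script:

```python
from datetime import date, timedelta

def collect_tweets_for_day(day):
    # Hypothetical placeholder: the real script would query the Twitter
    # API for this date and apply the one-tweet-per-user filter.
    return [f"tweet collected on {day.isoformat()}"]

# Build a small corpus over four consecutive days.
corpus = []
start = date(2020, 5, 20)
for offset in range(4):
    day = start + timedelta(days=offset)
    corpus.extend(collect_tweets_for_day(day))
```

After the loop, corpus holds the filtered tweets from 2020-05-20 through 2020-05-23; scaling the date range up yields a multi-day corpus.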
In the next article of this series, we’ll look at some Natural Language Processing, Data Analysis and Topic Modeling to assess the data we have collected!
Leading researchers of the field of Artificial Intelligence met to discuss the future of (human/artificial) intelligence and its implications on society in the first Interdisciplinary Summer School on Artificial Intelligence, from the 5th to the 7th of June in Vila Nova da Cerveira, Portugal. Members of the AI for People association were present to gain a perspective on current trends in AI that reflect on societal benefits and problems. In the following article, we provide a brief overview of the topics discussed in the talks at the conference and highlight implications for societal advantages or disadvantages of AI progress. Notably, not all the talks have been summarised as we focused only on those that were considered relatable to the attending members of AI for People.
Computational Creativity
Tony Veale from the Creative Language Systems Group at UCD provides an overview of Computational Creativity (CC), a research domain that aims to create machines that create meaning. Creativity is thought of as the final frontier in artificial intelligence research [1]. Creative computer systems are already a reality in our society, whether as generated fake news, computer-generated art or music. But are those systems truly creative, or mere generative systems? The CC domain does not by any means aim to replace artists and writers with machines; rather, it tries to develop tools that can be used in a co-creative process. Such semi-autonomous creative systems can provide computational power to explore parts of the creative space that would not be accessible to the creators on their own. Prof. Veale's battery of Twitter bots aims to provoke interaction within the vibrant and dynamic Twitter community [2]. The holy grail of CC, developing truly creative systems capable of criticising, explaining and creating their own masterpieces, is still considered to be of debatable reach.
Implications: We see Artificial Intelligence as something logical, reasonable and efficient, and we tend to associate its influence with the economy and technology. We might overlook that the domain of creativity, which lies at the core of how a society develops its culture, art and communication, is equally affected by AI. We need to become aware of this influence. On the one hand, it works in favour of the creative human potential by providing powerful tools that can help us develop new ideas. On the other hand, there is the risk of underestimating this creative influence and falling for fake news and the like. The former is the benevolent use of CC, the latter its malicious (ab)use.
Machine Learning in History of Science
Jochen Büttner from the MPIWG Berlin presented new tools for a long-established discipline: Using machine learning approaches for corpus research in the history of science. Büttner presents the starting point as the extraction of knowledge from the analysis of an ancient literature corpus. Conventional methods, e.g. manual identification of similar illustrations among different documents, are highly time-consuming and seen as impractical. However, machine learning techniques provide a solution to such tasks.
Büttner explained how different techniques are being utilised to detect illustrations in digitised books and to identify clusters of illustrations, based on the use of the same woodblocks in the printing process (shared between printers or passed on).
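To give a flavour of the idea (this is an illustrative sketch, not Büttner's actual pipeline), illustrations printed from the same woodblock should yield near-identical feature vectors, so they can be grouped by a simple greedy comparison. The feature values and the distance threshold below are invented for the example:

```python
import numpy as np

# Toy feature vectors extracted from three scanned illustrations:
# two impressions of the same woodblock and one unrelated image.
features = np.array([
    [0.90, 0.10, 0.0],   # illustration A
    [0.88, 0.12, 0.0],   # near-duplicate of A (same woodblock)
    [0.10, 0.80, 0.3],   # unrelated illustration B
])

clusters = []
for i, vec in enumerate(features):
    for cluster in clusters:
        # Compare against the first member of each existing cluster;
        # a small distance suggests the same woodblock was used.
        if np.linalg.norm(vec - features[cluster[0]]) < 0.1:
            cluster.append(i)
            break
    else:
        clusters.append([i])  # no close match: start a new cluster
```

Here the first two illustrations end up in one cluster and the third in its own, mirroring the woodblock-sharing analysis described in the talk (the real work uses learned image features rather than hand-made vectors).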
Implications: The research provides an interesting example of how one research field (history of science) can greatly benefit from another (artificial intelligence). With only six months of AI experience, Prof. Büttner can achieve results that would otherwise have taken years of effort. Yet, from an AI perspective, the implementation is rather naive; the divergence between abstract machine learning research and actual applications in other domains is clear, as specialised algorithms could be used to yield better results. One challenge addressed by the talk is the rapid pace of development in ML, which seems overwhelming even for those who specialise in machine learning. Moreover, ML demands a rather high level of mathematical and computational understanding, which makes it even harder for foreign domains to gain access. It is therefore key to provide adequate educational paths for everyone and to encourage the application of AI by establishing adequate publication formats, which will in turn foster interdisciplinary dialogue.
Artificial Intelligence Engineering: A Critical View
The industry talk was given by Paulo Gomes, Head of AI at Critical Software, who offers the insights of someone who worked for years in research before switching to industry. The company is involved in several projects that use machine learning: identification of anomalous behaviour in vessels (navigation problems, drug traffic, illegal fishing), prediction of phone signal usage to prevent mobile network shutdowns, optimisation of energy consumption in car manufacturing, and even decision making in stress situations in a military context. The variety of addressed domains shows the range of involvement of AI in our 'technologised' society.
Implications: The talk also addresses the critical gap between what companies promise and what is actually possible with AI. This gap is not only bad for the economy, but directly harmful to people. As AI grows, expectations have already risen far beyond what can be achieved in either research or industry. The AI hype about massive leaps in technology due to recent developments in deep learning is only somewhat justified, yet it is triggering a "New Arms Race for AI" [3] between the USA, Russia and China. The talk points out that this technological bump fits the scheme of the hype cycle for emerging technologies, with deep learning as the technology trigger (see image).
Suddenly, every company needs to open an AI department even though there are too few people with actual experience in the field. A wave of job quitting and career swapping is currently being observed. Nonetheless, in most cases people find themselves with little field experience in a company that has even less: low knowledge growth and a lack of appreciation due to little understanding on the company's side. These people might end up jumping in front of, rather than on top of, the AI hype train.
Why superintelligent AI will never exist
This talk was given by Luc Steels from the VUB Artificial Intelligence Lab (now at the evolutionary biology department of Pompeu Fabra University). In a similar fashion to the previous talk, Steels outlines the rise of AI technologies in research, economy and politics. He describes the cycle in a somewhat different way, one that can be found in various other phenomena: climate change has been discussed for decades, but was given very little actual attention in politics and economics; only when people are faced with the immediate consequences do politics and economics start to pick it up. In the race for AI technology, we can observe the same initial underestimation in a lack of development: for a long time AI struggled to establish itself in the academic world and found little attention in industry. Now we are facing an overestimation in which everyone is creating ever higher expectations. Why is it that the promised superintelligent AI will not exist? Here are a few examples and implications from Steels:
Most Deep Learning systems are very dataset-specific and task-specific. For example, systems that are trained to recognize dogs, fail to recognize other animals or dog images that are turned upside-down. The features learned by the algorithm are irrelevant when it comes to human categorization of reality.
It is said that these problems can be overcome with more data. But many of them are due to the distribution and the probabilities within the data, and those will not change. That is, these systems do not learn global context, even when presented with more data.
Language systems can be trained without a task and can be provided with massive amounts of context. Yet language is a dynamic, evolving system that changes strongly over time and context. Language models would therefore quickly lose their validity unless retrained on a regular basis, which is an enormous computational effort.
“A deep-learning system doesn’t have any explanatory power, the more powerful the deep-learning system becomes, the more opaque it can become. As more features are extracted, the diagnosis becomes increasingly accurate. Why these features were extracted out of millions of other features, however, remains an unanswerable question.”
Geoffrey Hinton, computer scientist at the University of Toronto — founding father of neural networks
These systems learn from our data, not our knowledge. Therefore, in some cases they do not apply any sort of common sense and absorb our biases into their models. An example is Microsoft's Tay chatbot, which started spreading anti-semitic content after only a few hours online [4].
Reinforcement learning algorithms are implemented to optimise the traffic on a web page, not to provide content. Consequently, click-bait is more valuable to the algorithm than useful information.
Conclusion
This summer school was the first of its kind, a collaboration of the AI associations of Spain and Portugal. Despite the small number of participants and the lack of female speakers, this first interdisciplinary platform for the AI community provided a basic discussion of the implications of AI and its future. More people should be educated about the illusory expectations created by the AI hype in order to prevent damage to research and society. The author would like to thank João M. Cunha and Matteo Fabbri for their contributions to this article.
References:
[1] Colton, Simon, and Geraint A. Wiggins. "Computational Creativity: The Final Frontier?" Proceedings of ECAI 2012.
[2] Veale, Tony, and Mike Cook. Twitterbots: Making Machines that Make Meaning. MIT Press, 2018.
[3] Barnes, Julian E., and Josh Chin. "The New Arms Race in AI." The Wall Street Journal, 2 March 2018.
[4] Wolf, Marty J., K. Miller, and Frances S. Grodzinsky. "Why we should have seen that coming: comments on Microsoft's Tay 'experiment,' and wider implications." ACM SIGCAS Computers and Society 47.3 (2017): 54–64.
This blog post has not been written by an AI. It has been written by a human intelligence pursuing a PhD in Artificial Intelligence. Although the first sentence seems trivial, it might not be so in the near future. If we can no longer distinguish a machine from a human during a phone conversation, as Google Duplex has promised, we should start to be suspicious about textual content on the web. Bots are already estimated to be responsible for 24% of all tweets on Twitter. Who is responsible for all this spam?
But really, this blog post has not been written by an AI, trust me. If it were, it would be much smarter and more eloquent, because eventually AI systems will make better decisions than humans. The whole argument about responsible AI is really an argument about how we define "better" in the previous sentence. But first, let's point out that the ongoing discussion about responsible AI often conflates at least two levels of understanding algorithms:
Artificial Intelligence in the sense of machine learning applications
General Artificial Intelligence in the sense of an above-human-intelligence system
This blog post does not aim to blur the line between humans and machines, nor does it aim to provide answers to the ethical questions that arise from artificial intelligence. It simply tries to contrast the conflated layers of AI responsibility and presents a few contemporary approaches at each of those layers.
Artificial Intelligence in the sense of machine learning applications
In recent years, we have definitely reached the first level of AI, which already presents us with ethical dilemmas in a variety of applications: autonomous cars, automated manufacturing and chatbots. Who is responsible for an accident caused by a self-driving car? Can a car decide in the face of a moral dilemma that even humans struggle to agree on? How can technical advances be combined with education programmes (human resource development) to help workers practice new sophisticated skills so as not to lose their jobs? Do we need to declare online identities (is it a person or a bot)? How do we control for the manipulation of emotions through social bots?
These are all questions that we are already facing. The artificial intelligence that gives rise to them is a controllable system; that means its human creator (the programmer, company or government) can decide how the algorithm should be designed such that the resulting behaviour abides by whatever rules follow from the answers to these questions. The responsibility therefore lies with the human. In the same way that we sell hammers, which can be used as a tool or abused as a weapon, we are not responsible for the malicious abuse of AI systems. Whether used for good or bad, these AI systems show adaptability, interaction and autonomy, each of which comes with its respective confines.
Autonomy has to act within the bounds of responsibility, which includes the chain of responsible actors: if we gave full autonomy to a system, we could not take responsibility for its actions; but as we do not have fully autonomous systems yet, the responsibility lies with the programmers, followed by some supervision, which normally follows company standards. Within this well-established chain of responsibility that is in place in most industrial companies, we need to locate the responsibilities for AI systems with respect to their degree of autonomy. The other two properties, adaptability and interaction, directly contribute to the responsibility we can take for a system. If we allow full interaction with the system, we lose accountability and hence give away responsibility again. Accountability cannot be about the algorithms alone; the interaction must also provide an explanation and justification in order to be accountable and, consequently, responsible. Each of these values is more than just a difficult balancing act; they pose intricate challenges in their very definition. Consider the explainability of accountable AI: we already see the surge of an entire field called XAI (Explainable Artificial Intelligence). Nonetheless, we cannot simply start explaining AI algorithms to everyone on the basis of their code; first, we need to come up with a feasible level of explanation. Do we make the code open source and leave the explanation to the user? Do we provide security labels? Can we define quality standards for AI?
The latter has been suggested by the High-Level Expert Group on AI of the European AI Alliance. This group of 52 experts includes engineers, researchers, economists, lawyers and philosophers from academic, non-academic, corporate and non-corporate institutions. Their first draft of the Ethics Guidelines for Trustworthy AI proposes guiding principles as well as investment and policy strategies, and in particular advises on how to use AI to build an impact in Europe by leveraging Europe's enablers of AI.
On the one hand, the challenges seem broad, and coming up with ethics guidelines that encompass all possible scenarios appears to be a daunting task. On the other hand, none of these questions are new to us. Aviation has a thorough and practicable set of guidelines, laws and regulations that allow us to trust systems which are already mostly autonomous, and yet we do not ask for an explanation of their autopilot software. We cannot simply transfer those rules to all autonomous applications, but we should concern ourselves with the importance of such guidelines rather than condemn the task.
General Artificial Intelligence in the sense of an above-human-intelligence system
In the previous discussion, we have seen that the problems arising from the first level of AI systems do impact us today and that we are dealing with them one way or the other. The discussion should be different when we talk about General Artificial Intelligence. Here, we assume that at some point the computing power of a machine supersedes not only the computing power of a human brain (which is already the case), but gives rise to an intelligence that supersedes human intelligence. It has been argued that this will trigger an unprecedented jump in technological growth, resulting in incalculable changes to human civilization: the so-called technological singularity.
In this scenario, we no longer deal with a tractable algorithm, as the superintelligence would be capable of rewriting any rule or guideline it deems trivial. There would be no way of preventing the system from breaching any security barrier constructed using human intelligence. Many scenarios predict that such an intelligence will eventually get rid of humans or enslave mankind (see Bostrom's Superintelligence or the Wachowskis' Matrix trilogy). But there is also a surge of serious research institutions that argue for alternative scenarios and study how we can align such an AI system with our values. This second level of AI has much larger consequences, with questions that can only be based on theoretical assumptions rather than pragmatic guidelines or implementations.
An issue that arises from the conflation of the two layers is that people tend to mistrust a self-driving car because they attribute to the system some form of general intelligence that is not (yet) there. Currently, autonomous self-driving cars merely avoid obstacles and are not even aware of the type of object (man, child, dog). Furthermore, all the apocalyptic scenarios contain the same sort of fallacy: they argue using human logic. We simply cannot conceive a logic that would supersede our cognition. Any ethical principle, moral guideline or logical conclusion we want to attribute to the AI has been derived from thousands of years of human reasoning. A superintelligent system might evolve this reasoning within a split second to a degree that it would take us another thousand years to understand that single step. Therefore, any imagination we have about the future past the point of a superintelligence is as speculative as religious imagination. Interestingly, this conflation of thoughts has led to the founding of "The Church of Artificial Intelligence".
Responsibility at both levels
My responsibility as an AI researcher is to educate people about the technology that they are using and the technology that they will be facing. In the case of technology that is already in place, we have to disentangle the notion of Artificial Intelligence from that of an uncontrollable super-power which will overtake humanity. As pointed out, the responsibility for responsible AI lies with governments, institutions and programmers. The former need to set guidelines and make sure they are being followed; the latter two need to follow them. At this stage, it is up to the people to create the rules that they want AI to follow.
Artificial intelligence is happening, and it will not stop merging with our society. It is probably the strongest transformation of civilization since the invention of the steam engine. On the one hand, the industrial revolution led to great progress and wealth for most of humankind. On the other hand, it led to great destruction of our environment, climate and planet. These were consequences we did not anticipate or were not willing to accept, consequences which are leading us to the brink of our own extinction if no counter-action is taken. The same will be true for the advent of AI that we are currently witnessing. It can lead to great benefits, wealth and progress for most of the technological world, but we are responsible for ensuring that the consequences do not push us over the brink of extinction. Even though we might not be able to anticipate all the consequences, we as a society have the responsibility to act with caution and thoroughness. To conclude with the words of Spider-Man's uncle, "with great power comes great responsibility". And as AI might be the greatest and last power to be created by humans, it might be too great a responsibility, or it will be smart enough to be responsible for itself.