1st AI for People Workshop

On 08.08.2020 (Saturday) and 09.08.2020 (Sunday) the first AI for People Workshop will be held. The event will be held online and is entirely free. Register here.

AI for People was born out of the idea of shaping Artificial Intelligent technology around human and societal needs. We believe that technology should respect the anthropocentric principle. It should be at the service of people, not vice-versa. In order to foster this idea, we need to narrow the gap between civil society and technical experts. This gap is one in knowledge, in action and in tools for change.

In this spirit, we want to share our knowledge through a hands-on workshop open to everyone. We are glad to announce 7 speakers offering talks on a diversity of topics in and around Artificial Intelligence. There are 2 basic courses that require little to no programming experience, 4 advanced courses with varying degrees of technical depth, and one invited talk.

The Schedule:

All times are CET (Central European Time)

The Topics and Speakers:

A hands-on Exercise in Natural Language Processing. The lecturer Philipp Wicke will give a brief introduction to Natural Language Processing (NLP). This lecture is very practice-oriented and Philipp will show an example of topic modeling on real data. More …
Introduction to AI Ethics. Our chair Marta Ziosi will provide a broad introduction to the topics ethically relevant to AI development. Whether you have a technical or a social-science background, this course will present the trade-offs that technology confronts society with and provide you with the conceptual tools relevant to the field of AI ethics. More …
Open-Source AI. Our invited speaker Dr. Ibrahim Haddad, Executive Director of the LF AI Foundation, will talk about open source AI. More …
Coding AI. The lecturer Kevin Trebing will give an introduction on how to create AI applications. For this, you will learn the basics of PyTorch, one of the biggest AI frameworks next to TensorFlow. More …
Cultural AI. Maurice Jones will outline how the meaning of AI as a technology is socially constructed and what role cultural factors play in this process. He will give a practical example of how different cultures create different meanings surrounding technologies. More …
Creative AI. The lecturer Gabriele Graffieti will give an introduction to creative AI: what it means for an artificial intelligence to be creative and how to instill creativity into the training process. In this lecture we’ll cover generative models in detail, in particular Generative Adversarial Networks. More …
Continual AI. The lecturer Vincenzo Lomonaco will give a brief introduction to the topic of Continual Learning for AI. This lecture is very practice-oriented and based on slides and runnable code on Google Colaboratory. You will be able to code and experiment alongside the lecture in order to acquire the basic knowledge and skills. More …

The course is free for everyone, yet we ask you to register in advance. As a non-profit organisation, all of our work and effort is voluntary. If you like the workshop, we suggest a donation of 15€ for the entire workshop (aiforpeople.org/supporters).

Attendance is limited by the size of the virtual classroom and entry is on a first-registered, first-served basis, so we ask you to register as soon as possible here: REGISTRATION

If you have any questions, feel free to contact us at: aiapplications@aiforpeople.org


1st AI for People Workshop was originally published in AI for People on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Rise of Sinophobia on Twitter during the Covid-19 Pandemic — Technical Part 1

Written by Philipp Wicke and Marta Ziosi for AI For People — 23.05.2020

Follow the link: https://medium.com/ai-for-people/the-rise-of-sinophobia-on-twitter-during-the-covid-19-pandemic-conceptual-part-1-545f81a61619

In this article, we will have a look at the technical underpinnings of the first episode of our series on “Analyzing online discourse for everyone”. In this first part, we will concern ourselves with the data acquisition process. We will outline how to approach a data mining task and how to implement it.

In order to obtain data about an online discourse, we first need to answer a few questions:

  • What is considered online discourse?
  • How can we access online discourse?
  • How much online discourse do we need to observe?
[1] Data Mining. How to obtain information from the internet?

We can consider online discourse to be everything discussed in an online environment. While newspapers, government websites and academic articles are part of it, we want to focus on a broader section of the online discourse, one that has almost no filters: Twitter.

Twitter is ideal because it has about 330 million monthly active users (as of Q1 2019). Of these, more than 40 percent use the service daily, creating about 500 million tweets per day. Furthermore, those tweets are mostly freely accessible! In fact, this provides so much data that we need to create our own filters. Our first step is therefore to see what we can access and what we actually need to access:

We want:

  • Data over a period of time
  • Data relating to a certain topic
  • Data of a considerable proportion

We get:

  • The free Twitter API allows access to the last 7–10 days of tweets
  • Every day sees about 500,000,000 tweets
  • With collection restrictions (rate limits), we can collect about 10,000 tweets per hour
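To get a feel for what these numbers mean in practice, here is a quick back-of-the-envelope sketch (our own arithmetic, based on the rough rate above, not an official API figure):

```python
RATE_PER_HOUR = 10_000  # approximate free-tier collection rate, from above

def hours_to_collect(n_tweets, rate=RATE_PER_HOUR):
    """Rough wall-clock hours needed to collect n_tweets at the given rate."""
    return n_tweets / rate

# A 50-tweet test sample is nearly instant; a million-tweet corpus
# would keep the crawler busy for roughly 100 hours.
print(hours_to_collect(1_000_000))  # 100.0
```

This is why the rate-limit handling discussed below matters: any corpus of meaningful size requires the crawler to run unattended for hours or days.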

Now, we need to put together what-we-want and what-we-get. In this tutorial we will write Python code that has three simple requirements: Python 3.7+, the Tweepy package and a Twitter account.

The Tweepy API connects Python with Twitter.

There are a lot of great tutorials that explain how to use Tweepy, how to create Twitter bots, and how to obtain data from Twitter and use it. Therefore, we will keep it short here: instead of explaining what the tool is, we will explain in detail how we use it.

We go to the Twitter Developer page and log in with our Twitter credentials (you might need to apply for a developer account, which is fairly easy and quickly done). Next, we will have to create an App and generate our access credentials. Those credentials will be the key to connect the Tweepy API with the Python program. Make sure to store a copy of the access tokens:

Generating Twitter API key and Access Tokens for Tweepy on the twitter-dev website.

Now, we have everything to start writing our Python code. First of all, we need to import the Tweepy package (install with “pip install tweepy”) and we will have to write our access credentials and tokens into the code:

import tweepy as tw
consumer_key= "writeYourOwnConsumerKeyHere12345"
consumer_secret= "writeYourOwnConsumerSecretHere12345"
access_token= "writeYourOwnAccessTokenHere12345"
access_token_secret= "writeYourOwnAccessTokenSecretHere12345"

With the right keys and tokens, we can now authenticate our access to Twitter through our Python code:

# Twitter authentication
auth = tw.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tw.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)

What is happening in the code above? We use Tweepy to create an authentication handler and store it in “auth”, using the consumer key and consumer secret: basically, we define through which door we want to access Twitter, and with “auth.set_access_token(…)” we provide the key to that door. The open door is then stored, along with some parameters, in “api”. One of those parameters is the door (“auth”); another is “wait_on_rate_limit=True”. The Tweepy documentation says this parameter decides “Whether or not to automatically wait for rate limits to replenish”. Why?

Free Twitter access comes with a rate limit, i.e. you can only download a certain number of tweets before you need to wait or Twitter kicks you out. But: “Rate limits are divided into 15 minute intervals.” When we set wait_on_rate_limit to True, our program automatically waits out the 15-minute interval whenever we exceed the rate limit, so Twitter does not lock us out and we continue to get new data!

Now, we need to specify the parameters for our search:

start_day: Date at which to start crawling data, in the format YYYY-MM-DD. It can be at most 7 days in the past.

end_day: Date at which to stop crawling data, in the format YYYY-MM-DD. If you want to crawl a single day, set this to the day after start_day.

amount: How many tweets you want to collect. Maybe take 15 at the beginning to test everything.

label: In order to store the data, you need to label it, otherwise you'll overwrite it every time!

search_words: A string that combines your search words with AND or OR connectors. We will look at an example of this.

start_day = "2020-05-23"
end_day = "2020-05-24"
amount = 50
# stores the data here as: test_2020-05-23_n50
label = "test_"+start_day+"_n"+str(amount)
search_words = "#covid19 OR #coronavirus OR #ncov2019 OR #2019ncov OR #nCoV OR #nCoV2019 OR #2019nCoV OR #COVID19 -filter:retweets"
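As an aside, the long search_words string can also be assembled from a plain list of hashtags, which keeps it easy to extend. A small sketch (the helper build_query is our own, not part of the Twitter or Tweepy API):

```python
def build_query(hashtags, exclude_retweets=True):
    """Join hashtags into an OR-query; optionally exclude retweets."""
    query = " OR ".join(hashtags)
    if exclude_retweets:
        query += " -filter:retweets"
    return query

covid_tags = ["#covid19", "#coronavirus", "#ncov2019", "#2019ncov",
              "#nCoV", "#nCoV2019", "#2019nCoV", "#COVID19"]
search_words = build_query(covid_tags)
print(search_words)
# #covid19 OR #coronavirus OR #ncov2019 OR #2019ncov OR #nCoV OR #nCoV2019 OR #2019nCoV OR #COVID19 -filter:retweets
```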

The parameters above will collect a test sample of 50 (amount) tweets from the 23rd of May to the 24th of May, so just 50 tweets from one day. Those tweets will be stored in the file “test_2020-05-23_n50”. We now apply the first filter; otherwise, we would just collect any sort of tweet from that day. Our search words are common hashtags of the Covid19 discourse: #covid19, #coronavirus, etc. Furthermore, we want to look at tweets and not retweets, so we exclude retweets with “-filter:retweets”. Now, we can start to obtain the data and run:

tweets = tw.Cursor(api.search,
                   tweet_mode='extended',
                   q=search_words,
                   lang="en",
                   since=start_day,
                   until=end_day).items(amount)

Here, we further set the language to “en” = English and the tweet mode to “extended”, which makes sure the entire tweet is stored. The rest of the parameters are as we have defined them before. Now, in the next two lines of code, we simply reformat the obtained tweets into a list and print the first tweet just to have a look:

tweets = [tweet for tweet in tweets]
print(tweets[0])
Status(_api=<tweepy.api.API object at 0x0FFF37D0>, _json={'created_at': 'Mon Apr 06 23:59:59 +0000 2020', 'id': 124367356449479936, 'id_str': '124367356449479936', 'full_text': 'we will get through this together #Covid19', 'truncated': False, 'display_text_range': [0, 42], 'entities': {'hashtags': [{'text': 'Covid19', 'indices': [34, 42]}], 'symbols': [], 'user_mentions': [], 'urls': []}, 'metadata': {'iso_language_code': 'en', 'result_type': 'recent'}, 'source': '<a href="https://mobile.twitter.com" rel="nofollow">Twitter Web App</a>', 'in_reply_to_status_id': None, 'in_reply_to_status_id_str': None, 'in_reply_to_user_id': None, 'in_reply_to_user_id_str': None, 'in_reply_to_screen_name': None, 'user': {'id': 45367I723, 'id_str': '45367I723', 'name': 'John Doe', 'screen_name': 'jodoe', 'location': '', 'description': '', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 188, 'friends_count': 611 …

As you can see, this is a ton of information. Number of retweets, number of likes, coordinates, profile background image URL… everything about that single tweet! That is why we now filter for the user.id and the full_text. If you want, you can also access other information such as location, but for now we are not interested in that. Have a look at the following code; its explanation follows below:

first_entry = None
last_entry = None
all_user_ids = []
raw_tweets = []
for tweet in tweets:
    if not first_entry:
        first_entry = tweet.created_at.strftime("%Y-%m-%d %H:%M:%S")
        print("First tweet collected at: "+str(first_entry))
        print("-------------------------------------------")
    if tweet.user.id not in all_user_ids:
        all_user_ids.append(tweet.user.id)
        full_tweet = tweet.full_text.replace('\n','')
        if full_tweet:
            print("User #"+str(tweet.user.id)+":")
            print(full_tweet+"\n------------")
            raw_tweets.append(full_tweet)
    last_entry = tweet.created_at.strftime("%Y-%m-%d %H:%M:%S")
print("Last tweet collected at: "+str(last_entry))

This code creates an empty list for all the user ids and then iterates over all tweets. It looks at the created_at field of a tweet to check whether it is the first entry (because we initially set first_entry to None). Now it checks if tweet.user.id is not in the list of all_user_ids. This means it only looks at tweets from users we have not seen yet. Why did we do that?

A scientific analysis of fake-news spread during the 2016 US presidential election showed that about 1% of users accounted for 80% of the fake news, and its authors report other research suggesting that 80% of all tweets can be linked to the top 10% of most active users. Therefore, in order to represent a diversity of opinions that cannot be traced back to just a few users, we filter out multiple tweets from the same user.

Then our code appends the user id (as we have now seen that user) and stores the full tweet. The replace statement (replace('\n','')) just gets rid of line breaks in tweets. We check if full_tweet, because we could have an empty tweet (which sometimes is a bug of the API). We print the full tweet (the "\n------------" is a line break and some dashes, so it looks nicer when printed) and store each full tweet in a list called raw_tweets. Finally, we access the created_at field of the very last tweet to get its date of creation. Our script will then print some of the collected tweets, which could look like this:

First tweet collected at: 2020-04-06 23:59:59
-------------------------------------------
User #20I3120348:
we will get through this together #Covid19
------------
User #203480I312:
They fear #Trump2020. They created this version of #coronavirus Just to get him out of office. Looks like the plan worked in the UK...
------------
User #96902235II37193185:
Like millions of others I don't see eye to eye with Boris Johnson but I hope he pulls through. Why? Because I'm human. I wouldn't wish this on my worst enemy. I've witnessed someone die of pneumonia and believe me it's NOT pretty. #GetWellBoris #PrayForBoris #COVID19

And that is it for the first part! We have now collected 50 tweets from the 23rd of May 2020 that relate to the Covid19 discourse. Hopefully, it is clear how this script can be extended to create an entire corpus of thousands of tweets over multiple days. Such a corpus has thankfully been created by various researchers, including ourselves. With this corpus, we can then start to investigate the relation between the Covid19 discourse and Sinophobia.
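One loose end: the label we defined at the beginning is never actually used in the snippets above. A minimal way to persist the collected tweets under that label, one tweet per line, might look like this (save_tweets is a hypothetical helper of our own, not part of the original script):

```python
def save_tweets(raw_tweets, label):
    """Write the collected tweets to <label>.txt, one tweet per line."""
    filename = label + ".txt"
    with open(filename, "w", encoding="utf-8") as f:
        for tweet in raw_tweets:
            f.write(tweet + "\n")
    return filename

# e.g. save_tweets(raw_tweets, label) writes "test_2020-05-23_n50.txt",
# which can then be reloaded for the analysis in the next article.
```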

In the next article of this series, we’ll look at some Natural Language Processing, Data Analysis and Topic Modeling to assess the data we have collected!

References

[1] Bucket-wheel excavator 286, Inden surface mine, Germany; the bucket-wheel is under repair. 10. April 2016. https://pixabay.com/en/open-pit-mining-raw-materials-1327116/ pixel2013 (Silvia & Frank) Edit: Cropped and overlay of numbers. CC0 1.0.


The Rise of Sinophobia on Twitter during the Covid-19 Pandemic — Technical Part 1 was originally published in AI for People on Medium, where people are continuing the conversation by highlighting and responding to this story.
