January 26, 2020

globalcitizen/2019-wuhan-coronavirus-data

2019 Wuhan Coronavirus data (2019-nCoV)

repo name globalcitizen/2019-wuhan-coronavirus-data
repo link https://github.com/globalcitizen/2019-wuhan-coronavirus-data
language PHP
size (curr.) 366361 kB
stars (curr.) 495
created 2020-01-25
license GNU General Public License v3.0

2019 Wuhan Coronavirus data (COVID-19 / 2019-nCoV)

This public repository archives data over time from various public sources on the web.

Data is presented as timestamped CSV files for maximum compatibility.

It is hoped that this data will be useful to those producing visualizations or analyses.

Code is included.

Sample animation

Shown here in GIF format. A better (smaller, higher-resolution) webm version is also generated.

[image: sample animation]

Sample visualization

[image: sample visualization (China province map)]

[image: sample visualization (world map)]

Generates static SVGs.

Source images were China_blank_province_map.svg and BlankMap-World.svg, both from Wikimedia Commons.

Requirements

Unix-like OS with the dependencies installed (see Software Dependencies). In practice that means macOS with brew, Linux, or a BSD. Windows is unsupported.

Generating

China

For a China map, the following command will fetch data from DXY and render it.

./build china

You now have timestamped JSON, CSV and SVG files in the data-sources/dxy/data/ subdirectory.

World

For a world map, the process is similar. Note that the BNO world data parser is currently broken, with no plans to fix it.

./build world

You now have timestamped CSV and SVG files in data-sources/bno/data.

Software Dependencies

Probably an incomplete list:

  • bash
  • perl
  • php
  • imagemagick
  • gifsicle
  • ffmpeg
  • wget
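
For example, on macOS with Homebrew or on Debian/Ubuntu, the non-base tools can be installed along these lines (package names are assumed; check your system):

brew install imagemagick gifsicle ffmpeg wget        # macOS
sudo apt install imagemagick gifsicle ffmpeg wget    # Debian/Ubuntu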

Sources used

BNO

Includes detail on foreign sources and individual provincial update URLs. Updated roughly once per day. Note that this source's parser is currently broken, with no plans to fix it.

DXY

High-level information without specific source URLs. However, it is updated frequently and appears to be the best available data.

TODO

Other projects

How this was built (non-technical explanation)

This section is written for the curious / non-technical user.

The general approach to problems such as these is as follows:

  1. Gather the data
  2. Modify and store it
  3. Do something with it

Gather the data

The area of programming concerned with gathering data from websites that were not explicitly designed to provide it is called web scraping.

In general, web scraping consists of making an HTTP (web) request to the website in question, parsing (or interpreting) the response, and extracting the data of interest. Thereafter some modification may be required.
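
As a minimal sketch of the idea (not the repository's actual scraper; the URL, page structure, and extraction pattern here are purely illustrative), a page embedding its statistics as JSON in a script tag might be handled like this:

# fetch the page (URL is illustrative)
wget -qO page.html 'https://example.com/outbreak-stats'
# extract the JSON array assigned to a script variable (pattern is illustrative)
perl -ne 'print $1 if /window\.stats\s*=\s*(\[.*?\]);/' page.html > stats.json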

Modify and store it

We translate some Chinese and English information (toponyms, or geographic region names) into a known format by matching against a static database file for countries and a similar file for regions in or near China.
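
As an illustration of this matching step (the file regions.tsv and its two-column layout are hypothetical, not the repository's actual database format), a native toponym could be looked up in a tab-separated file like this:

# map a Chinese province name to its canonical English form
awk -F'\t' -v name='湖北省' '$1 == name { print $2 }' regions.tsv    # prints e.g. "Hubei"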

We then store the data in various formats, mostly CSV and JSON, timestamped in a most-to-least-significant format inspired by the ISO 8601 standard, so that filenames sort chronologically.
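
For example (the filenames here are illustrative, not necessarily the repository's exact layout), a UTC timestamp ordered year, month, day, hour, minute, second sorts lexically in time order:

ts=$(date -u +%Y%m%d-%H%M%S)    # e.g. 20200126-045301
cp latest.csv "data/${ts}.csv"  # lexical sort order == chronological order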

Do something with it

Finally, we further interpret and process the data in two stages.

Static image generation

First, we transform some reference SVG maps gathered from Wikimedia Commons by applying the data we have captured.
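
Because SVG is plain XML text, "applying the data" can be as simple as rewriting the fill color of each region's path to reflect its case count. A minimal sketch, assuming the source map marks each province with an id attribute and a default fill (both details are assumptions about the file):

# recolor Hubei's path according to its case count (id and colors illustrative)
sed 's/id="hubei" fill="#f0f0f0"/id="hubei" fill="#b30000"/' China_blank_province_map.svg > frame.svg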

Combine into an animation

Finally, we animate multiple such images into two formats: animated GIF, and the greatly superior and far more modern webm container format with VP9 encoding. This is done using the open source tools imagemagick and ffmpeg.
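
The underlying commands look roughly like the following; the flags and frame names are illustrative, not the repository's exact invocation:

# assemble per-timestamp frames into an animated GIF, then optimize it
convert -delay 50 -loop 0 frames/*.png anim.gif
gifsicle -O3 --colors 128 anim.gif -o anim-small.gif
# encode the same frames as VP9 video in a webm container
ffmpeg -framerate 2 -pattern_type glob -i 'frames/*.png' -c:v libvpx-vp9 -b:v 0 -crf 30 anim.webm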
