diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..1177560
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,3 @@
+# correct the language detection on github
+# exclude data files from linguist analysis
+notebooks/* linguist-generated
diff --git a/README.md b/README.md
index fca56de..c721416 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,11 @@
 # Misinformation campaign analysis
+![License: MIT](https://img.shields.io/github/license/ssciwr/misinformation)
+![GitHub Workflow Status](https://img.shields.io/github/workflow/status/ssciwr/misinformation/CI)
+![codecov](https://img.shields.io/codecov/c/github/ssciwr/misinformation)
+![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=ssciwr_misinformation&metric=alert_status)
+![Language](https://img.shields.io/github/languages/top/ssciwr/misinformation)
+
 
 Extract data from from social media images and texts in disinformation campaigns.
 
 **_This project is currently under development!_**
@@ -28,4 +34,4 @@ There are sample notebooks in the `misinformation/notebooks` folder for you to e
 
 1. Facial analysis: Use the notebook `facial_expressions.ipynb` to identify if there are faces on the image, if they are wearing masks, and if they are not wearing masks also the race, gender and dominant emotion.
 1. Object analysis: Use the notebook `ojects_expression.ipynb` to identify certain objects in the image. Currently, the following objects are being identified: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, cell phone.
-There are further notebooks that are currently of exploratory nature (`colors_expression` to identify certain colors on the image, `get-text-from-image` to extract text that is contained in an image.)
\ No newline at end of file
+There are further notebooks that are currently of exploratory nature (`colors_expression` to identify certain colors on the image, `get-text-from-image` to extract text that is contained in an image.)
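
Note on the `.gitattributes` change: `linguist-generated` hides the matched paths from diffs and excludes them from GitHub's language statistics, which is what corrects the repository's detected language here. If further tuning were ever needed, Linguist documents a few related overrides; the sketch below uses hypothetical paths purely for illustration and is not part of this change.

```gitattributes
# Illustrative paths only; the attribute names are Linguist's documented overrides.
# mark third-party code as vendored (excluded from language statistics)
vendor/** linguist-vendored
# classify a directory as documentation (also excluded from the statistics)
docs/** linguist-documentation
# force the reported language for a given file pattern
*.ipynb linguist-language=Python
```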