diff --git a/build/doctrees/ammico.doctree b/build/doctrees/ammico.doctree index 029dc39..fd64ab6 100644 Binary files a/build/doctrees/ammico.doctree and b/build/doctrees/ammico.doctree differ diff --git a/build/doctrees/environment.pickle b/build/doctrees/environment.pickle index 62d6798..578d7ae 100644 Binary files a/build/doctrees/environment.pickle and b/build/doctrees/environment.pickle differ diff --git a/build/doctrees/nbsphinx/notebooks/DemoNotebook_ammico.ipynb b/build/doctrees/nbsphinx/notebooks/DemoNotebook_ammico.ipynb index 292a93d..f6a53da 100644 --- a/build/doctrees/nbsphinx/notebooks/DemoNotebook_ammico.ipynb +++ b/build/doctrees/nbsphinx/notebooks/DemoNotebook_ammico.ipynb @@ -166,7 +166,7 @@ "source": [ "image_dict = ammico.find_files(\n", " # path=\"/content/drive/MyDrive/misinformation-data/\",\n", - " path=data_path.as_posix(),\n", + " path=str(data_path),\n", " limit=15,\n", ")" ] @@ -177,7 +177,30 @@ "source": [ "## Step 2: Inspect the input files using the graphical user interface\n", "A Dash user interface is to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested `image_dict` is passed through the `AnalysisExplorer` class. The interface is run on a specific port which is passed using the `port` keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. \n", - "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run." + "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.\n", + "\n", + "### Ethical disclosure statement\n", + "\n", + "If you want to run an analysis using the EmotionDetector detector type, you first have to respond to an ethical disclosure statement. This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.\n", + "\n", + "For this, answer \"yes\" or \"no\" to the below prompt. This will set an environment variable with the name given as in `accept_disclosure`. To re-run the disclosure prompt, unset the variable by uncommenting the line `os.environ.pop(accept_disclosure, None)`.
To permanently set this environment variable, add it to your shell configuration via your `.profile` or `.bashrc` file.\n", + "\n", + "If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification depending on the provided thresholds. If the disclosure is rejected, only the presence of faces and emotion (if not wearing a mask) is detected." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# respond to the disclosure statement\n", "# this will set an environment variable for you\n", "# if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell\n", "# to reset the environment variable and re-run the disclosure prompt, uncomment the line below\n", "accept_disclosure = \"DISCLOSURE_AMMICO\"\n", "# os.environ.pop(accept_disclosure, None)\n", "_ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)" ] }, { @@ -822,7 +845,7 @@ "metadata": {}, "source": [ "## Detection of faces and facial expression analysis\n", - "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.\n", + "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).\n", "\n", "\n", "\n", @@ -832,10 +855,11 @@ "\n", "From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.\n", "\n", - "A similar threshold as for the emotion recognition is set for the race detection, `race_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n", + "A similar threshold as for the emotion recognition is set for the race/ethnicity, gender and age detection, `race_threshold`, `gender_threshold`, `age_threshold`, with the default set to 50%, so that a value is returned in the analysis only if the corresponding confidence is above 0.5. \n", "\n", - "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold` and \n", - "`race_threshold` are optional:" + "You may also pass the name of the environment variable that determines whether you accept or reject the ethical disclosure statement.
By default, the variable is named `DISCLOSURE_AMMICO`.\n", + "\n", + "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold` are optional:" ] }, { @@ -845,7 +869,9 @@ "outputs": [], "source": [ "for key in image_dict.keys():\n", - " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50).analyse_image()" + " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50,\n", + " gender_threshold=50, age_threshold=50, \n", + " accept_disclosure=\"DISCLOSURE_AMMICO\").analyse_image()" ] }, { diff --git a/build/doctrees/notebooks/DemoNotebook_ammico.doctree b/build/doctrees/notebooks/DemoNotebook_ammico.doctree index 20ea498..46d16f2 100644 Binary files a/build/doctrees/notebooks/DemoNotebook_ammico.doctree and b/build/doctrees/notebooks/DemoNotebook_ammico.doctree differ diff --git a/build/html/_sources/notebooks/DemoNotebook_ammico.ipynb.txt b/build/html/_sources/notebooks/DemoNotebook_ammico.ipynb.txt index 292a93d..f6a53da 100644 --- a/build/html/_sources/notebooks/DemoNotebook_ammico.ipynb.txt +++ b/build/html/_sources/notebooks/DemoNotebook_ammico.ipynb.txt @@ -166,7 +166,7 @@ "source": [ "image_dict = ammico.find_files(\n", " # path=\"/content/drive/MyDrive/misinformation-data/\",\n", - " path=data_path.as_posix(),\n", + " path=str(data_path),\n", " limit=15,\n", ")" ] @@ -177,7 +177,30 @@ "source": [ "## Step 2: Inspect the input files using the graphical user interface\n", "A Dash user interface is to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested `image_dict` is passed through the `AnalysisExplorer` class. The interface is run on a specific port which is passed using the `port` keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. \n", - "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run." + "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.\n", + "\n", + "### Ethical disclosure statement\n", + "\n", + "If you want to run an analysis using the EmotionDetector detector type, you first have to respond to an ethical disclosure statement.
This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.\n", + "\n", + "For this, answer \"yes\" or \"no\" to the below prompt. This will set an environment variable with the name given as in `accept_disclosure`. To re-run the disclosure prompt, unset the variable by uncommenting the line `os.environ.pop(accept_disclosure, None)`. To permanently set this environment variable, add it to your shell configuration via your `.profile` or `.bashrc` file.\n", + "\n", + "If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification depending on the provided thresholds. If the disclosure is rejected, only the presence of faces and emotion (if not wearing a mask) is detected." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# respond to the disclosure statement\n", "# this will set an environment variable for you\n", "# if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell\n", "# to reset the environment variable and re-run the disclosure prompt, uncomment the line below\n", "accept_disclosure = \"DISCLOSURE_AMMICO\"\n", "# os.environ.pop(accept_disclosure, None)\n", "_ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)" ] }, { @@ -822,7 +845,7 @@ "metadata": {}, "source": [ "## Detection of faces and facial expression analysis\n", - "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.\n", + "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).\n", "\n", "\n", "\n", @@ -832,10 +855,11 @@ "\n", "From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.\n", "\n", - "A similar threshold as for the emotion recognition is set for the race detection, `race_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n", + "A similar threshold as for the emotion recognition is set for the race/ethnicity, gender and age detection, `race_threshold`, `gender_threshold`, `age_threshold`, with the default set to 50%, so that a value is returned in the analysis only if the corresponding confidence is above 0.5. \n", "\n", - "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold` and \n", - "`race_threshold` are optional:" + "You may also pass the name of the environment variable that determines whether you accept or reject the ethical disclosure statement.
By default, the variable is named `DISCLOSURE_AMMICO`.\n", + "\n", + "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold` are optional:" ] }, { @@ -845,7 +869,9 @@ "outputs": [], "source": [ "for key in image_dict.keys():\n", - " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50).analyse_image()" + " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50,\n", + " gender_threshold=50, age_threshold=50, \n", + " accept_disclosure=\"DISCLOSURE_AMMICO\").analyse_image()" ] }, { diff --git a/build/html/ammico.html b/build/html/ammico.html index af6f9aa..a9d88fa 100644 --- a/build/html/ammico.html +++ b/build/html/ammico.html @@ -153,6 +153,7 @@
  • deepface_symlink_processor()
+ • ethical_disclosure()
  • color_analysis module
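For orientation, the snippet below sketches how the pieces added in this diff fit together in a plain Python session. It is an illustrative example only, not part of the generated documentation; the guard around the disclosure call and the placeholder image path are assumptions, while the function names and keyword arguments are taken verbatim from the notebook cells shown above.

import os
import ammico

# Name of the environment variable that caches the disclosure answer,
# as described in the notebook cells above.
accept_disclosure = "DISCLOSURE_AMMICO"

# Hypothetical guard: only show the interactive prompt if the variable
# has not already been set in this shell or a previous session.
if accept_disclosure not in os.environ:
    _ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)

# Collect the input images (the path is a placeholder) and run the face
# analysis with the documented thresholds; all four thresholds are optional.
image_dict = ammico.find_files(path="/path/to/images", limit=15)
for key in image_dict.keys():
    image_dict[key] = ammico.EmotionDetector(
        image_dict[key],
        emotion_threshold=50,   # report emotions only above 0.5 confidence
        race_threshold=50,      # same threshold logic for race/ethnicity,
        gender_threshold=50,    # gender,
        age_threshold=50,       # and age
        accept_disclosure=accept_disclosure,
    ).analyse_image()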