This commit is contained in:
iulusoy 2024-06-12 07:54:34 +00:00
parent 13454943e0
commit 39173f831a
13 changed files with 184 additions and 39 deletions

Binary data
build/doctrees/ammico.doctree

Binary file not shown.

Binary data
build/doctrees/environment.pickle

Binary file not shown.

View file

@@ -166,7 +166,7 @@
"source": [
"image_dict = ammico.find_files(\n",
" # path=\"/content/drive/MyDrive/misinformation-data/\",\n",
" path=data_path.as_posix(),\n",
" path=str(data_path),\n",
" limit=15,\n",
")"
]
@@ -177,7 +177,30 @@
"source": [
"## Step 2: Inspect the input files using the graphical user interface\n",
"A Dash user interface is to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested `image_dict` is passed through the `AnalysisExplorer` class. The interface is run on a specific port which is passed using the `port` keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. \n",
"The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run."
"The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.\n",
"\n",
"### Ethical disclosure statement\n",
"\n",
"If you want to run an analysis using the EmotionDetector detector type, you have first have to respond to an ethical disclosure statement. This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.\n",
"\n",
"For this, answer \"yes\" or \"no\" to the below prompt. This will set an environment variable with the name given as in `accept_disclosure`. To re-run the disclosure prompt, unset the variable by uncommenting the line `os.environ.pop(accept_disclosure, None)`. To permanently set this envorinment variable, add it to your shell via your `.profile` or `.bashr` file.\n",
"\n",
"If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification dependend on the provided thresholds. If the disclosure is rejected, only the presence of faces and emotion (if not wearing a mask) is detected."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# respond to the disclosure statement\n",
"# this will set an environment variable for you\n",
"# if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell\n",
"# to re-set the environment variable, uncomment the below line\n",
"accept_disclosure = \"DISCLOSURE_AMMICO\"\n",
"# os.environ.pop(accept_disclosure, None)\n",
"_ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)"
]
},
{
@@ -822,7 +845,7 @@
"metadata": {},
"source": [
"## Detection of faces and facial expression analysis\n",
"Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.\n",
"Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).\n",
"\n",
"<img src=\"../_static/emotion_detector.png\" width=\"800\" />\n",
"\n",
@@ -832,10 +855,11 @@
"\n",
"From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.\n",
"\n",
"A similar threshold as for the emotion recognition is set for the race detection, `race_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"A similar threshold as for the emotion recognition is set for the race/ethnicity, gender and age detection, `race_threshold`, `gender_threshold`, `age_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"\n",
"Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold` and \n",
"`race_threshold` are optional:"
"You may also pass the name of the environment variable that determines if you accept or reject the ethical disclosure statement. By default, the variable is named `DISCLOSURE_AMMICO`.\n",
"\n",
"Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold` are optional:"
]
},
{
@@ -845,7 +869,9 @@
"outputs": [],
"source": [
"for key in image_dict.keys():\n",
" image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50).analyse_image()"
" image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50,\n",
" gender_threshold=50, age_threshold=50, \n",
" accept_disclosure=\"DISCLOSURE_AMMICO\").analyse_image()"
]
},
{
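
The disclosure cell added in the hunk above only calls `ammico.ethical_disclosure()`. As a minimal sketch of the environment-variable mechanics described in the text (using only the standard-library `os` module and the `ethical_disclosure()` function documented later in this commit; the guard itself is not part of the notebook):

    import os
    import ammico  # assumed to be installed, as in the notebook

    accept_disclosure = "DISCLOSURE_AMMICO"  # name of the variable that stores your answer

    # Only show the disclosure prompt if no answer is stored in the current environment;
    # to answer again, remove the variable first: os.environ.pop(accept_disclosure, None)
    if os.environ.get(accept_disclosure) is None:
        _ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)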

Binary data
build/doctrees/notebooks/DemoNotebook_ammico.doctree

Binary file not shown.

View file

@@ -166,7 +166,7 @@
"source": [
"image_dict = ammico.find_files(\n",
" # path=\"/content/drive/MyDrive/misinformation-data/\",\n",
" path=data_path.as_posix(),\n",
" path=str(data_path),\n",
" limit=15,\n",
")"
]
@@ -177,7 +177,30 @@
"source": [
"## Step 2: Inspect the input files using the graphical user interface\n",
"A Dash user interface is to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested `image_dict` is passed through the `AnalysisExplorer` class. The interface is run on a specific port which is passed using the `port` keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. \n",
"The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run."
"The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.\n",
"\n",
"### Ethical disclosure statement\n",
"\n",
"If you want to run an analysis using the EmotionDetector detector type, you have first have to respond to an ethical disclosure statement. This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.\n",
"\n",
"For this, answer \"yes\" or \"no\" to the below prompt. This will set an environment variable with the name given as in `accept_disclosure`. To re-run the disclosure prompt, unset the variable by uncommenting the line `os.environ.pop(accept_disclosure, None)`. To permanently set this envorinment variable, add it to your shell via your `.profile` or `.bashr` file.\n",
"\n",
"If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification dependend on the provided thresholds. If the disclosure is rejected, only the presence of faces and emotion (if not wearing a mask) is detected."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# respond to the disclosure statement\n",
"# this will set an environment variable for you\n",
"# if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell\n",
"# to re-set the environment variable, uncomment the below line\n",
"accept_disclosure = \"DISCLOSURE_AMMICO\"\n",
"# os.environ.pop(accept_disclosure, None)\n",
"_ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)"
]
},
{
@@ -822,7 +845,7 @@
"metadata": {},
"source": [
"## Detection of faces and facial expression analysis\n",
"Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.\n",
"Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).\n",
"\n",
"<img src=\"../_static/emotion_detector.png\" width=\"800\" />\n",
"\n",
@@ -832,10 +855,11 @@
"\n",
"From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.\n",
"\n",
"A similar threshold as for the emotion recognition is set for the race detection, `race_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"A similar threshold as for the emotion recognition is set for the race/ethnicity, gender and age detection, `race_threshold`, `gender_threshold`, `age_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"\n",
"Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold` and \n",
"`race_threshold` are optional:"
"You may also pass the name of the environment variable that determines if you accept or reject the ethical disclosure statement. By default, the variable is named `DISCLOSURE_AMMICO`.\n",
"\n",
"Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold` are optional:"
]
},
{
@@ -845,7 +869,9 @@
"outputs": [],
"source": [
"for key in image_dict.keys():\n",
" image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50).analyse_image()"
" image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50,\n",
" gender_threshold=50, age_threshold=50, \n",
" accept_disclosure=\"DISCLOSURE_AMMICO\").analyse_image()"
]
},
{
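
All four threshold keywords added in this commit (`emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold`) follow the same rule spelled out in the notebook text: a value is reported only when the model confidence (between 0 and 1) exceeds the threshold given in percent. A small illustrative sketch of that rule (not ammico's internal code):

    # Illustrative only: how a percent threshold maps a model confidence (0..1) to a reported value.
    def apply_threshold(label, confidence, threshold=50.0):
        """Return the label only if the confidence exceeds the threshold (in percent), else None."""
        return label if confidence * 100 > threshold else None

    print(apply_threshold("happy", 0.72))  # -> happy (72% > 50%)
    print(apply_threshold("asian", 0.41))  # -> None  (41% < 50%)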

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ„Π°ΠΉΠ»

@@ -153,6 +153,7 @@
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="#faces.deepface_symlink_processor"><code class="docutils literal notranslate"><span class="pre">deepface_symlink_processor()</span></code></a></li>
<li class="toctree-l3"><a class="reference internal" href="#faces.ethical_disclosure"><code class="docutils literal notranslate"><span class="pre">ethical_disclosure()</span></code></a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#module-colors">color_analysis module</a><ul>
@@ -1187,7 +1188,7 @@
<span id="faces-module"></span><h1>faces module<a class="headerlink" href="#module-faces" title="Link to this heading"></a></h1>
<dl class="py class">
<dt class="sig sig-object py" id="faces.EmotionDetector">
<em class="property"><span class="pre">class</span><span class="w"> </span></em><span class="sig-prename descclassname"><span class="pre">faces.</span></span><span class="sig-name descname"><span class="pre">EmotionDetector</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">subdict</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">dict</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">emotion_threshold</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span><span class="w"> </span><span class="o"><span class="pre">=</span></span><span class="w"> </span><span class="default_value"><span class="pre">50.0</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">race_threshold</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span><span class="w"> </span><span class="o"><span class="pre">=</span></span><span class="w"> </span><span class="default_value"><span class="pre">50.0</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#faces.EmotionDetector" title="Link to this definition"></a></dt>
<em class="property"><span class="pre">class</span><span class="w"> </span></em><span class="sig-prename descclassname"><span class="pre">faces.</span></span><span class="sig-name descname"><span class="pre">EmotionDetector</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">subdict</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">dict</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">emotion_threshold</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span><span class="w"> </span><span class="o"><span class="pre">=</span></span><span class="w"> </span><span class="default_value"><span class="pre">50.0</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">race_threshold</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span><span class="w"> </span><span class="o"><span class="pre">=</span></span><span class="w"> </span><span class="default_value"><span class="pre">50.0</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">gender_threshold</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span><span class="w"> </span><span class="o"><span class="pre">=</span></span><span class="w"> </span><span class="default_value"><span class="pre">50.0</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">age_threshold</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span><span class="w"> </span><span class="o"><span class="pre">=</span></span><span class="w"> </span><span class="default_value"><span class="pre">50.0</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">accept_disclosure</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">str</span></span><span class="w"> </span><span class="o"><span class="pre">=</span></span><span class="w"> </span><span class="default_value"><span class="pre">'DISCLOSURE_AMMICO'</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#faces.EmotionDetector" title="Link to this definition"></a></dt>
<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">AnalysisMethod</span></code></p>
<dl class="py method">
<dt class="sig sig-object py" id="faces.EmotionDetector.analyse_image">
@@ -1271,6 +1272,17 @@
<span class="sig-prename descclassname"><span class="pre">faces.</span></span><span class="sig-name descname"><span class="pre">deepface_symlink_processor</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">name</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#faces.deepface_symlink_processor" title="Link to this definition"></a></dt>
<dd></dd></dl>
<dl class="py function">
<dt class="sig sig-object py" id="faces.ethical_disclosure">
<span class="sig-prename descclassname"><span class="pre">faces.</span></span><span class="sig-name descname"><span class="pre">ethical_disclosure</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">accept_disclosure</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">str</span></span><span class="w"> </span><span class="o"><span class="pre">=</span></span><span class="w"> </span><span class="default_value"><span class="pre">'DISCLOSURE_AMMICO'</span></span></em><span class="sig-paren">)</span><a class="headerlink" href="#faces.ethical_disclosure" title="Link to this definition"></a></dt>
<dd><p>Asks the user to accept the ethical disclosure.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><p><strong>accept_disclosure</strong> (<em>str</em>) – The name of the disclosure variable (default: β€œDISCLOSURE_AMMICO”).</p>
</dd>
</dl>
</dd></dl>
</section>
<section id="module-colors">
<span id="color-analysis-module"></span><h1>color_analysis module<a class="headerlink" href="#module-colors" title="Link to this heading"></a></h1>

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ„Π°ΠΉΠ»

@@ -218,6 +218,8 @@
<table style="width: 100%" class="indextable genindextable"><tr>
<td style="width: 33%; vertical-align: top;"><ul>
<li><a href="ammico.html#faces.EmotionDetector">EmotionDetector (class in faces)</a>
</li>
<li><a href="ammico.html#faces.ethical_disclosure">ethical_disclosure() (in module faces)</a>
</li>
<li><a href="ammico.html#multimodal_search.MultimodalSearch.extract_image_features_basic">extract_image_features_basic() (multimodal_search.MultimodalSearch method)</a>
</li>

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ„Π°ΠΉΠ»

@@ -194,6 +194,7 @@
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="ammico.html#faces.deepface_symlink_processor"><code class="docutils literal notranslate"><span class="pre">deepface_symlink_processor()</span></code></a></li>
<li class="toctree-l2"><a class="reference internal" href="ammico.html#faces.ethical_disclosure"><code class="docutils literal notranslate"><span class="pre">ethical_disclosure()</span></code></a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="ammico.html#module-colors">color_analysis module</a><ul>

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ„Π°ΠΉΠ»

@@ -60,7 +60,10 @@
</li>
<li class="toctree-l1"><a class="reference internal" href="#Step-0:-Create-and-set-a-Google-Cloud-Vision-Key">Step 0: Create and set a Google Cloud Vision Key</a></li>
<li class="toctree-l1"><a class="reference internal" href="#Step-1:-Read-your-data-into-AMMICO">Step 1: Read your data into AMMICO</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#Step-2:-Inspect-the-input-files-using-the-graphical-user-interface">Step 2: Inspect the input files using the graphical user interface</a></li>
<li class="toctree-l2"><a class="reference internal" href="#Step-2:-Inspect-the-input-files-using-the-graphical-user-interface">Step 2: Inspect the input files using the graphical user interface</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#Ethical-disclosure-statement">Ethical disclosure statement</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#Step-3:-Analyze-all-images">Step 3: Analyze all images</a></li>
<li class="toctree-l2"><a class="reference internal" href="#Step-4:-Convert-analysis-output-to-pandas-dataframe-and-write-csv">Step 4: Convert analysis output to pandas dataframe and write csv</a></li>
<li class="toctree-l2"><a class="reference internal" href="#Read-in-a-csv-file-containing-text-and-translating/analysing-the-text">Read in a csv file containing text and translating/analysing the text</a></li>
@@ -257,7 +260,7 @@ tf.ones([2, 2])
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">image_dict</span> <span class="o">=</span> <span class="n">ammico</span><span class="o">.</span><span class="n">find_files</span><span class="p">(</span>
<span class="c1"># path=&quot;/content/drive/MyDrive/misinformation-data/&quot;,</span>
<span class="n">path</span><span class="o">=</span><span class="n">data_path</span><span class="o">.</span><span class="n">as_posix</span><span class="p">(),</span>
<span class="n">path</span><span class="o">=</span><span class="nb">str</span><span class="p">(</span><span class="n">data_path</span><span class="p">),</span>
<span class="n">limit</span><span class="o">=</span><span class="mi">15</span><span class="p">,</span>
<span class="p">)</span>
</pre></div>
@@ -268,6 +271,25 @@ tf.ones([2, 2])
<p>A Dash user interface is used to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested <code class="docutils literal notranslate"><span class="pre">image_dict</span></code> is passed through the <code class="docutils literal notranslate"><span class="pre">AnalysisExplorer</span></code>
class. The interface is run on a specific port which is passed using the <code class="docutils literal notranslate"><span class="pre">port</span></code> keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown
directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.</p>
<section id="Ethical-disclosure-statement">
<h3>Ethical disclosure statement<a class="headerlink" href="#Ethical-disclosure-statement" title="Link to this heading"></a></h3>
<p>If you want to run an analysis using the EmotionDetector detector type, you first have to respond to an ethical disclosure statement. This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.</p>
<p>For this, answer β€œyes” or β€œno” to the prompt below. This will set an environment variable with the name given in <code class="docutils literal notranslate"><span class="pre">accept_disclosure</span></code>. To re-run the disclosure prompt, unset the variable by uncommenting the line <code class="docutils literal notranslate"><span class="pre">os.environ.pop(accept_disclosure,</span> <span class="pre">None)</span></code>. To permanently set this environment variable, add it to your shell via your <code class="docutils literal notranslate"><span class="pre">.profile</span></code> or <code class="docutils literal notranslate"><span class="pre">.bashrc</span></code> file.</p>
<p>If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification depending on the provided thresholds. If the disclosure is rejected, only the presence of faces and their emotion (if no mask is worn) is detected.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># respond to the disclosure statement</span>
<span class="c1"># this will set an environment variable for you</span>
<span class="c1"># if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell</span>
<span class="c1"># to re-set the environment variable, uncomment the below line</span>
<span class="n">accept_disclosure</span> <span class="o">=</span> <span class="s2">&quot;DISCLOSURE_AMMICO&quot;</span>
<span class="c1"># os.environ.pop(accept_disclosure, None)</span>
<span class="n">_</span> <span class="o">=</span> <span class="n">ammico</span><span class="o">.</span><span class="n">ethical_disclosure</span><span class="p">(</span><span class="n">accept_disclosure</span><span class="o">=</span><span class="n">accept_disclosure</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
@@ -278,6 +300,7 @@ directly on the right next to the image. This way, the user can directly inspect
</div>
</div>
</section>
</section>
<section id="Step-3:-Analyze-all-images">
<h2>Step 3: Analyze all images<a class="headerlink" href="#Step-3:-Analyze-all-images" title="Link to this heading"></a></h2>
<p>The analysis can be run in production on all images in the data set. Depending on the size of the data set and the computing resources available, this can take some time.</p>
@@ -470,7 +493,7 @@ directly on the right next to the image. This way, the user can directly inspect
<section id="The-detector-modules">
<h1>The detector modules<a class="headerlink" href="#The-detector-modules" title="Link to this heading"></a></h1>
<p>The different detector modules with their options are explained in more detail in this section. ## Text detector Text on the images can be extracted using the <code class="docutils literal notranslate"><span class="pre">TextDetector</span></code> class (<code class="docutils literal notranslate"><span class="pre">text</span></code> module). The text is initially extracted using the Google Cloud Vision API and then translated into English with googletrans. The translated text is cleaned of whitespace, linebreaks, and numbers using Python syntax and spaCy.</p>
<p><img alt="0c75cc0bcb7d4081982b65bb456f1ab4" class="no-scaled-link" src="../_images/text_detector.png" style="width: 800px;" /></p>
<p><img alt="f1c211721e034ae7ac3eadfd8c20e9c6" class="no-scaled-link" src="../_images/text_detector.png" style="width: 800px;" /></p>
<p>The user can set whether the text should be further summarized, and analyzed for sentiment and named entity recognition, by setting the keyword <code class="docutils literal notranslate"><span class="pre">analyse_text</span></code> to <code class="docutils literal notranslate"><span class="pre">True</span></code> (the default is <code class="docutils literal notranslate"><span class="pre">False</span></code>). If set, the transformers pipeline is used for each of these tasks, with the default models as of 03/2023. Other models can be selected by setting the optional keyword <code class="docutils literal notranslate"><span class="pre">model_names</span></code> to a list of selected models, one for each task:
<code class="docutils literal notranslate"><span class="pre">model_names=[&quot;sshleifer/distilbart-cnn-12-6&quot;,</span> <span class="pre">&quot;distilbert-base-uncased-finetuned-sst-2-english&quot;,</span> <span class="pre">&quot;dbmdz/bert-large-cased-finetuned-conll03-english&quot;]</span></code> for summary, sentiment, and ner. To be even more specific, revision numbers can also be selected by specifying the optional keyword <code class="docutils literal notranslate"><span class="pre">revision_numbers</span></code> to a list of revision numbers for each model, for example <code class="docutils literal notranslate"><span class="pre">revision_numbers=[&quot;a4f8f3e&quot;,</span> <span class="pre">&quot;af0f99b&quot;,</span> <span class="pre">&quot;f2482bf&quot;]</span></code>.</p>
<p>Please note that for the Google Cloud Vision API (the TextDetector class) you need to set a key in order to process the images. This key is ideally set as an environment variable using for example</p>
@@ -552,7 +575,7 @@ directly on the right next to the image. This way, the user can directly inspect
<section id="Image-summary-and-query">
<h2>Image summary and query<a class="headerlink" href="#Image-summary-and-query" title="Link to this heading"></a></h2>
<p>The <code class="docutils literal notranslate"><span class="pre">SummaryDetector</span></code> can be used to generate image captions (<code class="docutils literal notranslate"><span class="pre">summary</span></code>) as well as visual question answering (<code class="docutils literal notranslate"><span class="pre">VQA</span></code>).</p>
<p><img alt="df3e5d6617b3447eb877e276342bc93b" class="no-scaled-link" src="../_images/summary_detector.png" style="width: 800px;" /></p>
<p><img alt="7b53d91a7e9d46169d6f7b53c3c99fda" class="no-scaled-link" src="../_images/summary_detector.png" style="width: 800px;" /></p>
<p>This module is based on the <a class="reference external" href="https://github.com/salesforce/LAVIS">LAVIS</a> library. Since the models can be quite large, an initial object is created which will load the necessary models into RAM/VRAM and then use them in the analysis. The user can specify the type of analysis to be performed using the <code class="docutils literal notranslate"><span class="pre">analysis_type</span></code> keyword. Setting it to <code class="docutils literal notranslate"><span class="pre">summary</span></code> will generate a caption (summary), <code class="docutils literal notranslate"><span class="pre">questions</span></code> will prepare answers (VQA) to a list of questions as set by the user,
<code class="docutils literal notranslate"><span class="pre">summary_and_questions</span></code> will do both. Note that the desired analysis type needs to be set here in the initialization of the detector object, and not when running the analysis for each image; the same holds true for the selected model.</p>
<p>The implemented models are listed below.</p>
@@ -804,22 +827,25 @@ directly on the right next to the image. This way, the user can directly inspect
</section>
<section id="Detection-of-faces-and-facial-expression-analysis">
<h2>Detection of faces and facial expression analysis<a class="headerlink" href="#Detection-of-faces-and-facial-expression-analysis" title="Link to this heading"></a></h2>
<p>Faces and facial expressions are detected and analyzed using the <code class="docutils literal notranslate"><span class="pre">EmotionDetector</span></code> class from the <code class="docutils literal notranslate"><span class="pre">faces</span></code> module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.</p>
<p><img alt="e4ff58bce9a849e1b3cd5e3b9b08a5ba" class="no-scaled-link" src="../_images/emotion_detector.png" style="width: 800px;" /></p>
<p>Faces and facial expressions are detected and analyzed using the <code class="docutils literal notranslate"><span class="pre">EmotionDetector</span></code> class from the <code class="docutils literal notranslate"><span class="pre">faces</span></code> module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).</p>
<p><img alt="01966d367ad4488991e78ad7c9af8c4f" class="no-scaled-link" src="../_images/emotion_detector.png" style="width: 800px;" /></p>
<p>Depending on the features found on the image, the face detection module returns a different analysis content: If no faces are found on the image, all further steps are skipped and the result <code class="docutils literal notranslate"><span class="pre">&quot;face&quot;:</span> <span class="pre">&quot;No&quot;,</span> <span class="pre">&quot;multiple_faces&quot;:</span> <span class="pre">&quot;No&quot;,</span> <span class="pre">&quot;no_faces&quot;:</span> <span class="pre">0,</span> <span class="pre">&quot;wears_mask&quot;:</span> <span class="pre">[&quot;No&quot;],</span> <span class="pre">&quot;age&quot;:</span> <span class="pre">[None],</span> <span class="pre">&quot;gender&quot;:</span> <span class="pre">[None],</span> <span class="pre">&quot;race&quot;:</span> <span class="pre">[None],</span> <span class="pre">&quot;emotion&quot;:</span> <span class="pre">[None],</span> <span class="pre">&quot;emotion</span> <span class="pre">(category)&quot;:</span> <span class="pre">[None]</span></code> is returned. If one or several faces are found, up to three faces are analyzed if they are partially concealed by a face mask. If
yes, only age and gender are detected; if no, also race, emotion, and dominant emotion are detected. In case of the latter, the output could look like this: <code class="docutils literal notranslate"><span class="pre">&quot;face&quot;:</span> <span class="pre">&quot;Yes&quot;,</span> <span class="pre">&quot;multiple_faces&quot;:</span> <span class="pre">&quot;Yes&quot;,</span> <span class="pre">&quot;no_faces&quot;:</span> <span class="pre">2,</span> <span class="pre">&quot;wears_mask&quot;:</span> <span class="pre">[&quot;No&quot;,</span> <span class="pre">&quot;No&quot;],</span> <span class="pre">&quot;age&quot;:</span> <span class="pre">[27,</span> <span class="pre">28],</span> <span class="pre">&quot;gender&quot;:</span> <span class="pre">[&quot;Man&quot;,</span> <span class="pre">&quot;Man&quot;],</span> <span class="pre">&quot;race&quot;:</span> <span class="pre">[&quot;asian&quot;,</span> <span class="pre">None],</span> <span class="pre">&quot;emotion&quot;:</span> <span class="pre">[&quot;angry&quot;,</span> <span class="pre">&quot;neutral&quot;],</span> <span class="pre">&quot;emotion</span> <span class="pre">(category)&quot;:</span> <span class="pre">[&quot;Negative&quot;,</span> <span class="pre">&quot;Neutral&quot;]</span></code>, where for the two faces that are detected (given by <code class="docutils literal notranslate"><span class="pre">no_faces</span></code>), some of the values are returned as a list
with the first item for the first (largest) face and the second item for the second (smaller) face (for example, <code class="docutils literal notranslate"><span class="pre">&quot;emotion&quot;</span></code> returns a list <code class="docutils literal notranslate"><span class="pre">[&quot;angry&quot;,</span> <span class="pre">&quot;neutral&quot;]</span></code> signifying the first face expressing anger, and the second face having a neutral expression).</p>
<p>The emotion detection reports the seven facial expressions angry, fear, neutral, sad, disgust, happy and surprise. These emotions are assigned based on the returned confidence of the model (between 0 and 1), with a high confidence signifying a high likelihood of the detected emotion being correct. Emotion recognition is not an easy task, even for a human; therefore, we have added a keyword <code class="docutils literal notranslate"><span class="pre">emotion_threshold</span></code> signifying the % value above which an emotion is counted as being detected. The
default is set to 50%, so that a confidence above 0.5 results in an emotion being assigned. If the confidence is lower, no emotion is assigned.</p>
<p>From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.</p>
<p>A similar threshold as for the emotion recognition is set for the race detection, <code class="docutils literal notranslate"><span class="pre">race_threshold</span></code>, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis.</p>
<p>Summarizing, the face detection is carried out using the following method call and keywords, where <code class="docutils literal notranslate"><span class="pre">emotion_threshold</span></code> and <code class="docutils literal notranslate"><span class="pre">race_threshold</span></code> are optional:</p>
<p>Similar thresholds to the one for emotion recognition are set for the race/ethnicity, gender and age detection: <code class="docutils literal notranslate"><span class="pre">race_threshold</span></code>, <code class="docutils literal notranslate"><span class="pre">gender_threshold</span></code>, <code class="docutils literal notranslate"><span class="pre">age_threshold</span></code>, each with a default of 50%, so that a value is returned in the analysis only if the corresponding confidence is above 0.5.</p>
<p>You may also pass the name of the environment variable that determines if you accept or reject the ethical disclosure statement. By default, the variable is named <code class="docutils literal notranslate"><span class="pre">DISCLOSURE_AMMICO</span></code>.</p>
<p>Summarizing, the face detection is carried out using the following method call and keywords, where <code class="docutils literal notranslate"><span class="pre">emotion_threshold</span></code>, <code class="docutils literal notranslate"><span class="pre">race_threshold</span></code>, <code class="docutils literal notranslate"><span class="pre">gender_threshold</span></code>, <code class="docutils literal notranslate"><span class="pre">age_threshold</span></code> are optional:</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="k">for</span> <span class="n">key</span> <span class="ow">in</span> <span class="n">image_dict</span><span class="o">.</span><span class="n">keys</span><span class="p">():</span>
<span class="n">image_dict</span><span class="p">[</span><span class="n">key</span><span class="p">]</span> <span class="o">=</span> <span class="n">ammico</span><span class="o">.</span><span class="n">EmotionDetector</span><span class="p">(</span><span class="n">image_dict</span><span class="p">[</span><span class="n">key</span><span class="p">],</span> <span class="n">emotion_threshold</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">race_threshold</span><span class="o">=</span><span class="mi">50</span><span class="p">)</span><span class="o">.</span><span class="n">analyse_image</span><span class="p">()</span>
<span class="n">image_dict</span><span class="p">[</span><span class="n">key</span><span class="p">]</span> <span class="o">=</span> <span class="n">ammico</span><span class="o">.</span><span class="n">EmotionDetector</span><span class="p">(</span><span class="n">image_dict</span><span class="p">[</span><span class="n">key</span><span class="p">],</span> <span class="n">emotion_threshold</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">race_threshold</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span>
<span class="n">gender_threshold</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">age_threshold</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span>
<span class="n">accept_disclosure</span><span class="o">=</span><span class="s2">&quot;DISCLOSURE_AMMICO&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">analyse_image</span><span class="p">()</span>
</pre></div>
</div>
</div>
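
The text detector passage in the rendered page above introduces the `analyse_text`, `model_names` and `revision_numbers` keywords, but the shown hunks cut off before a complete call. A rough sketch of how such a call could look, assuming `TextDetector` follows the same `analyse_image()` pattern as the `EmotionDetector` example above (keyword values are the examples quoted in the text; treat the exact signature as an assumption):

    # Sketch only: keyword names and example values are taken from the rendered text above;
    # the analyse_image() call pattern is assumed to match the EmotionDetector example.
    for key in image_dict.keys():
        image_dict[key] = ammico.TextDetector(
            image_dict[key],
            analyse_text=True,  # also run summary, sentiment and NER on the extracted text
            # model_names=["sshleifer/distilbart-cnn-12-6",
            #              "distilbert-base-uncased-finetuned-sst-2-english",
            #              "dbmdz/bert-large-cased-finetuned-conll03-english"],
            # revision_numbers=["a4f8f3e", "af0f99b", "f2482bf"],
        ).analyse_image()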

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ„Π°ΠΉΠ»

@@ -166,7 +166,7 @@
"source": [
"image_dict = ammico.find_files(\n",
" # path=\"/content/drive/MyDrive/misinformation-data/\",\n",
" path=data_path.as_posix(),\n",
" path=str(data_path),\n",
" limit=15,\n",
")"
]
@@ -177,7 +177,30 @@
"source": [
"## Step 2: Inspect the input files using the graphical user interface\n",
"A Dash user interface is to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested `image_dict` is passed through the `AnalysisExplorer` class. The interface is run on a specific port which is passed using the `port` keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. \n",
"The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run."
"The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.\n",
"\n",
"### Ethical disclosure statement\n",
"\n",
"If you want to run an analysis using the EmotionDetector detector type, you have first have to respond to an ethical disclosure statement. This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.\n",
"\n",
"For this, answer \"yes\" or \"no\" to the below prompt. This will set an environment variable with the name given as in `accept_disclosure`. To re-run the disclosure prompt, unset the variable by uncommenting the line `os.environ.pop(accept_disclosure, None)`. To permanently set this envorinment variable, add it to your shell via your `.profile` or `.bashr` file.\n",
"\n",
"If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification dependend on the provided thresholds. If the disclosure is rejected, only the presence of faces and emotion (if not wearing a mask) is detected."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# respond to the disclosure statement\n",
"# this will set an environment variable for you\n",
"# if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell\n",
"# to re-set the environment variable, uncomment the below line\n",
"accept_disclosure = \"DISCLOSURE_AMMICO\"\n",
"# os.environ.pop(accept_disclosure, None)\n",
"_ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)"
]
},
{
@@ -822,7 +845,7 @@
"metadata": {},
"source": [
"## Detection of faces and facial expression analysis\n",
"Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.\n",
"Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).\n",
"\n",
"<img src=\"../_static/emotion_detector.png\" width=\"800\" />\n",
"\n",
@@ -832,10 +855,11 @@
"\n",
"From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.\n",
"\n",
"A similar threshold as for the emotion recognition is set for the race detection, `race_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"A similar threshold as for the emotion recognition is set for the race/ethnicity, gender and age detection, `race_threshold`, `gender_threshold`, `age_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"\n",
"Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold` and \n",
"`race_threshold` are optional:"
"You may also pass the name of the environment variable that determines if you accept or reject the ethical disclosure statement. By default, the variable is named `DISCLOSURE_AMMICO`.\n",
"\n",
"Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold` are optional:"
]
},
{
@@ -845,7 +869,9 @@
"outputs": [],
"source": [
"for key in image_dict.keys():\n",
" image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50).analyse_image()"
" image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50,\n",
" gender_threshold=50, age_threshold=50, \n",
" accept_disclosure=\"DISCLOSURE_AMMICO\").analyse_image()"
]
},
{

Binary data
build/html/objects.inv

Binary file not shown.

File diff suppressed because one or more lines are too long

View file

@@ -166,7 +166,7 @@
"source": [
"image_dict = ammico.find_files(\n",
" # path=\"/content/drive/MyDrive/misinformation-data/\",\n",
" path=data_path.as_posix(),\n",
" path=str(data_path),\n",
" limit=15,\n",
")"
]
@@ -177,7 +177,30 @@
"source": [
"## Step 2: Inspect the input files using the graphical user interface\n",
"A Dash user interface is to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested `image_dict` is passed through the `AnalysisExplorer` class. The interface is run on a specific port which is passed using the `port` keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. \n",
"The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run."
"The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.\n",
"\n",
"### Ethical disclosure statement\n",
"\n",
"If you want to run an analysis using the EmotionDetector detector type, you have first have to respond to an ethical disclosure statement. This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.\n",
"\n",
"For this, answer \"yes\" or \"no\" to the below prompt. This will set an environment variable with the name given as in `accept_disclosure`. To re-run the disclosure prompt, unset the variable by uncommenting the line `os.environ.pop(accept_disclosure, None)`. To permanently set this envorinment variable, add it to your shell via your `.profile` or `.bashr` file.\n",
"\n",
"If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification dependend on the provided thresholds. If the disclosure is rejected, only the presence of faces and emotion (if not wearing a mask) is detected."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# respond to the disclosure statement\n",
"# this will set an environment variable for you\n",
"# if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell\n",
"# to re-set the environment variable, uncomment the below line\n",
"accept_disclosure = \"DISCLOSURE_AMMICO\"\n",
"# os.environ.pop(accept_disclosure, None)\n",
"_ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)"
]
},
{
@@ -822,7 +845,7 @@
"metadata": {},
"source": [
"## Detection of faces and facial expression analysis\n",
"Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.\n",
"Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).\n",
"\n",
"<img src=\"../_static/emotion_detector.png\" width=\"800\" />\n",
"\n",
@@ -832,10 +855,11 @@
"\n",
"From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.\n",
"\n",
"A similar threshold as for the emotion recognition is set for the race detection, `race_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"A similar threshold as for the emotion recognition is set for the race/ethnicity, gender and age detection, `race_threshold`, `gender_threshold`, `age_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"\n",
"Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold` and \n",
"`race_threshold` are optional:"
"You may also pass the name of the environment variable that determines if you accept or reject the ethical disclosure statement. By default, the variable is named `DISCLOSURE_AMMICO`.\n",
"\n",
"Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold` are optional:"
]
},
{
@@ -845,7 +869,9 @@
"outputs": [],
"source": [
"for key in image_dict.keys():\n",
" image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50).analyse_image()"
" image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50,\n",
" gender_threshold=50, age_threshold=50, \n",
" accept_disclosure=\"DISCLOSURE_AMMICO\").analyse_image()"
]
},
{
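
After the `EmotionDetector` loop shown in these hunks, each entry of `image_dict` carries the result fields listed in the rendered output description above ("face", "wears_mask", "age", "gender", "race", "emotion", "emotion (category)", ...). A minimal way to inspect one entry, assuming those field names (taken from the description, not from running the code):

    # Print the documented result fields for the first analyzed image.
    first_key = next(iter(image_dict))
    for field in ["face", "multiple_faces", "no_faces", "wears_mask",
                  "age", "gender", "race", "emotion", "emotion (category)"]:
        print(field, "->", image_dict[first_key].get(field))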