diff --git a/build/doctrees/ammico.doctree b/build/doctrees/ammico.doctree
index 029dc39..fd64ab6 100644
Binary files a/build/doctrees/ammico.doctree and b/build/doctrees/ammico.doctree differ
diff --git a/build/doctrees/environment.pickle b/build/doctrees/environment.pickle
index 62d6798..578d7ae 100644
Binary files a/build/doctrees/environment.pickle and b/build/doctrees/environment.pickle differ
diff --git a/build/doctrees/nbsphinx/notebooks/DemoNotebook_ammico.ipynb b/build/doctrees/nbsphinx/notebooks/DemoNotebook_ammico.ipynb
index 292a93d..f6a53da 100644
--- a/build/doctrees/nbsphinx/notebooks/DemoNotebook_ammico.ipynb
+++ b/build/doctrees/nbsphinx/notebooks/DemoNotebook_ammico.ipynb
@@ -166,7 +166,7 @@
"source": [
"image_dict = ammico.find_files(\n",
" # path=\"/content/drive/MyDrive/misinformation-data/\",\n",
- " path=data_path.as_posix(),\n",
+ " path=str(data_path),\n",
" limit=15,\n",
")"
]
@@ -177,7 +177,30 @@
"source": [
"## Step 2: Inspect the input files using the graphical user interface\n",
"A Dash user interface is to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested `image_dict` is passed through the `AnalysisExplorer` class. The interface is run on a specific port which is passed using the `port` keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. \n",
- "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run."
+ "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.\n",
+ "\n",
+ "### Ethical disclosure statement\n",
+ "\n",
+ "If you want to run an analysis using the EmotionDetector detector type, you have first have to respond to an ethical disclosure statement. This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.\n",
+ "\n",
+ "For this, answer \"yes\" or \"no\" to the below prompt. This will set an environment variable with the name given as in `accept_disclosure`. To re-run the disclosure prompt, unset the variable by uncommenting the line `os.environ.pop(accept_disclosure, None)`. To permanently set this envorinment variable, add it to your shell via your `.profile` or `.bashr` file.\n",
+ "\n",
+ "If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification dependend on the provided thresholds. If the disclosure is rejected, only the presence of faces and emotion (if not wearing a mask) is detected."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# respond to the disclosure statement\n",
+ "# this will set an environment variable for you\n",
+ "# if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell\n",
+ "# to re-set the environment variable, uncomment the below line\n",
+ "accept_disclosure = \"DISCLOSURE_AMMICO\"\n",
+ "# os.environ.pop(accept_disclosure, None)\n",
+ "_ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)"
]
},
{
@@ -822,7 +845,7 @@
"metadata": {},
"source": [
"## Detection of faces and facial expression analysis\n",
- "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.\n",
+ "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).\n",
"\n",
"
\n",
"\n",
@@ -832,10 +855,11 @@
"\n",
"From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.\n",
"\n",
- "A similar threshold as for the emotion recognition is set for the race detection, `race_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
+ "A similar threshold as for the emotion recognition is set for the race/ethnicity, gender and age detection, `race_threshold`, `gender_threshold`, `age_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"\n",
- "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold` and \n",
- "`race_threshold` are optional:"
+ "You may also pass the name of the environment variable that determines if you accept or reject the ethical disclosure statement. By default, the variable is named `DISCLOSURE_AMMICO`.\n",
+ "\n",
+ "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold` are optional:"
]
},
{
@@ -845,7 +869,9 @@
"outputs": [],
"source": [
"for key in image_dict.keys():\n",
- " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50).analyse_image()"
+ " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50,\n",
+ " gender_threshold=50, age_threshold=50, \n",
+ " accept_disclosure=\"DISCLOSURE_AMMICO\").analyse_image()"
]
},
{
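The disclosure mechanism added in the cells above reduces to a single environment variable. The following sketch shows how it could be pre-set for non-interactive batch runs; it assumes that `ethical_disclosure()` skips the prompt once the variable is set, and the value "True" is an assumption rather than a documented contract.

    import os
    import ammico

    # Name of the disclosure variable, as used in the notebook cell above.
    accept_disclosure = "DISCLOSURE_AMMICO"

    # Pre-set the variable so a batch job is not blocked by the interactive
    # prompt; the exact value ethical_disclosure() expects is an assumption.
    os.environ[accept_disclosure] = "True"

    # With the variable already set, this call should return without prompting.
    _ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)

    # To be asked again, unset the variable first:
    # os.environ.pop(accept_disclosure, None)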
diff --git a/build/doctrees/notebooks/DemoNotebook_ammico.doctree b/build/doctrees/notebooks/DemoNotebook_ammico.doctree
index 20ea498..46d16f2 100644
Binary files a/build/doctrees/notebooks/DemoNotebook_ammico.doctree and b/build/doctrees/notebooks/DemoNotebook_ammico.doctree differ
diff --git a/build/html/_sources/notebooks/DemoNotebook_ammico.ipynb.txt b/build/html/_sources/notebooks/DemoNotebook_ammico.ipynb.txt
index 292a93d..f6a53da 100644
--- a/build/html/_sources/notebooks/DemoNotebook_ammico.ipynb.txt
+++ b/build/html/_sources/notebooks/DemoNotebook_ammico.ipynb.txt
@@ -166,7 +166,7 @@
"source": [
"image_dict = ammico.find_files(\n",
" # path=\"/content/drive/MyDrive/misinformation-data/\",\n",
- " path=data_path.as_posix(),\n",
+ " path=str(data_path),\n",
" limit=15,\n",
")"
]
@@ -177,7 +177,30 @@
"source": [
"## Step 2: Inspect the input files using the graphical user interface\n",
"A Dash user interface is to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested `image_dict` is passed through the `AnalysisExplorer` class. The interface is run on a specific port which is passed using the `port` keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. \n",
- "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run."
+ "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.\n",
+ "\n",
+ "### Ethical disclosure statement\n",
+ "\n",
+ "If you want to run an analysis using the EmotionDetector detector type, you have first have to respond to an ethical disclosure statement. This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.\n",
+ "\n",
+ "For this, answer \"yes\" or \"no\" to the below prompt. This will set an environment variable with the name given as in `accept_disclosure`. To re-run the disclosure prompt, unset the variable by uncommenting the line `os.environ.pop(accept_disclosure, None)`. To permanently set this envorinment variable, add it to your shell via your `.profile` or `.bashr` file.\n",
+ "\n",
+ "If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification dependend on the provided thresholds. If the disclosure is rejected, only the presence of faces and emotion (if not wearing a mask) is detected."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# respond to the disclosure statement\n",
+ "# this will set an environment variable for you\n",
+ "# if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell\n",
+ "# to re-set the environment variable, uncomment the below line\n",
+ "accept_disclosure = \"DISCLOSURE_AMMICO\"\n",
+ "# os.environ.pop(accept_disclosure, None)\n",
+ "_ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)"
]
},
{
@@ -822,7 +845,7 @@
"metadata": {},
"source": [
"## Detection of faces and facial expression analysis\n",
- "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.\n",
+ "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).\n",
"\n",
"
\n",
"\n",
@@ -832,10 +855,11 @@
"\n",
"From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.\n",
"\n",
- "A similar threshold as for the emotion recognition is set for the race detection, `race_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
+ "A similar threshold as for the emotion recognition is set for the race/ethnicity, gender and age detection, `race_threshold`, `gender_threshold`, `age_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"\n",
- "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold` and \n",
- "`race_threshold` are optional:"
+ "You may also pass the name of the environment variable that determines if you accept or reject the ethical disclosure statement. By default, the variable is named `DISCLOSURE_AMMICO`.\n",
+ "\n",
+ "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold` are optional:"
]
},
{
@@ -845,7 +869,9 @@
"outputs": [],
"source": [
"for key in image_dict.keys():\n",
- " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50).analyse_image()"
+ " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50,\n",
+ " gender_threshold=50, age_threshold=50, \n",
+ " accept_disclosure=\"DISCLOSURE_AMMICO\").analyse_image()"
]
},
{
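The threshold keywords above are percentages that are compared against model confidences reported between 0 and 1. A minimal, stdlib-only sketch of this filtering logic; the function name and score dictionary are illustrative, not ammico's internal schema.

    from typing import Dict, Optional

    def apply_threshold(scores: Dict[str, float], threshold: float = 50.0) -> Optional[str]:
        """Return the top label only if its confidence (0..1) exceeds threshold (in %)."""
        label, confidence = max(scores.items(), key=lambda kv: kv[1])
        return label if confidence * 100 > threshold else None

    # With the default of 50%, a confidence of 0.7 yields a result, 0.45 does not.
    print(apply_threshold({"happy": 0.7, "sad": 0.3}))    # happy
    print(apply_threshold({"happy": 0.45, "sad": 0.40}))  # None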
diff --git a/build/html/ammico.html b/build/html/ammico.html
index af6f9aa..a9d88fa 100644
--- a/build/html/ammico.html
+++ b/build/html/ammico.html
@@ -153,6 +153,7 @@
 deepface_symlink_processor()
 ethical_disclosure()
 Bases: AnalysisMethod
 Asks the user to accept the ethical disclosure.
+accept_disclosure (str) – The name of the disclosure variable (default: "DISCLOSURE_AMMICO").
+
 deepface_symlink_processor()
 ethical_disclosure()
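Regarding the `as_posix()` → `str()` change in the `find_files` call above: both convert a `pathlib.Path` to a string, but `str()` uses the platform's native separator while `as_posix()` always produces forward slashes. A small stdlib-only illustration of the difference:

    from pathlib import Path, PureWindowsPath

    p = Path("data") / "images"
    print(str(p))        # native separator, e.g. "data/images" on POSIX
    print(p.as_posix())  # always forward slashes: "data/images"

    # The difference only becomes visible with Windows-style paths:
    w = PureWindowsPath("data") / "images"
    print(str(w))        # data\images
    print(w.as_posix())  # data/images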
diff --git a/build/html/objects.inv b/build/html/objects.inv
index e5b7eaa..37ebcdd 100644
Binary files a/build/html/objects.inv and b/build/html/objects.inv differ
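For Step 2 as described above, the nested `image_dict` is handed to the `AnalysisExplorer`, which serves a Dash app on the given port. A hedged sketch of such a call; `find_files` and the `port` keyword come from the text above, while the exact `AnalysisExplorer` constructor signature is an assumption (check the ammico API docs):

    import ammico

    # "data/" is a placeholder folder containing the input images.
    image_dict = ammico.find_files(path="data/", limit=15)

    # Constructor arguments are an assumption; run_server() is listed in the
    # ammico display module. Pick a free port, or change it on error.
    analysis_explorer = ammico.AnalysisExplorer(image_dict)
    analysis_explorer.run_server(port=8055)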
diff --git a/build/html/searchindex.js b/build/html/searchindex.js
index 39e7acb..4f3508b 100644
--- a/build/html/searchindex.js
+++ b/build/html/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. First, install tensorflow (https://www.tensorflow.org/install/pip)": [[7, "first-install-tensorflow-https-www-tensorflow-org-install-pip"]], "2. Second, install pytorch": [[7, "second-install-pytorch"]], "3. After we prepared right environment we can install the ammico package": [[7, "after-we-prepared-right-environment-we-can-install-the-ammico-package"]], "AMMICO - AI Media and Misinformation Content Analysis Tool": [[7, "ammico-ai-media-and-misinformation-content-analysis-tool"]], "AMMICO Demonstration Notebook": [[5, "AMMICO-Demonstration-Notebook"]], "AMMICO package modules": [[4, "ammico-package-modules"]], "BLIP2 models": [[5, "BLIP2-models"]], "Color analysis of pictures": [[5, "Color-analysis-of-pictures"]], "Color/hue detection": [[7, "color-hue-detection"]], "Compatibility problems solving": [[7, "compatibility-problems-solving"]], "Content extraction": [[7, "content-extraction"]], "Contents:": [[2, null]], "Crop posts module": [[6, "Crop-posts-module"]], "Cropping of posts": [[7, "cropping-of-posts"]], "Detection of faces and facial expression analysis": [[5, "Detection-of-faces-and-facial-expression-analysis"]], "Emotion recognition": [[7, "emotion-recognition"]], "FAQ": [[7, "faq"]], "Features": [[7, "features"]], "Formulate your search queries": [[5, "Formulate-your-search-queries"]], "Further detector modules": [[5, "Further-detector-modules"]], "Image Multimodal Search": [[5, "Image-Multimodal-Search"]], "Image summary and query": [[5, "Image-summary-and-query"]], "Import the ammico package.": [[5, "Import-the-ammico-package."]], "Improve the search results": [[5, "Improve-the-search-results"]], "Indexing and extracting features from images in selected folder": [[5, "Indexing-and-extracting-features-from-images-in-selected-folder"]], "Indices and tables": [[2, "indices-and-tables"]], "Installation": [[7, "installation"]], "Instructions how to generate and enable a google Cloud Vision API key": [[1, "instructions-how-to-generate-and-enable-a-google-cloud-vision-api-key"], [8, "instructions-how-to-generate-and-enable-a-google-cloud-vision-api-key"]], "License": [[3, "license"]], "Micromamba": [[7, "micromamba"]], "Read in a csv file containing text and translating/analysing the text": [[5, "Read-in-a-csv-file-containing-text-and-translating/analysing-the-text"]], "Save search results to csv": [[5, "Save-search-results-to-csv"]], "Step 0: Create and set a Google Cloud Vision Key": [[5, "Step-0:-Create-and-set-a-Google-Cloud-Vision-Key"]], "Step 1: Read your data into AMMICO": [[5, "Step-1:-Read-your-data-into-AMMICO"]], "Step 2: Inspect the input files using the graphical user interface": [[5, "Step-2:-Inspect-the-input-files-using-the-graphical-user-interface"]], "Step 3: Analyze all images": [[5, "Step-3:-Analyze-all-images"]], "Step 4: Convert analysis output to pandas dataframe and write csv": [[5, "Step-4:-Convert-analysis-output-to-pandas-dataframe-and-write-csv"]], "Text extraction": [[7, "text-extraction"]], "The detector modules": [[5, "The-detector-modules"]], "Usage": [[7, "usage"]], "Use a test dataset": [[5, "Use-a-test-dataset"]], "Welcome to AMMICO\u2019s documentation!": [[2, "welcome-to-ammico-s-documentation"]], "What happens if I don\u2019t have internet access - can I still use ammico?": [[7, "what-happens-if-i-don-t-have-internet-access-can-i-still-use-ammico"]], "What happens to the images that are sent to google Cloud Vision?": [[7, "what-happens-to-the-images-that-are-sent-to-google-cloud-vision"]], "What happens to 
the text that is sent to google Translate?": [[7, "what-happens-to-the-text-that-is-sent-to-google-translate"]], "Windows": [[7, "windows"]], "color_analysis module": [[0, "module-colors"]], "cropposts module": [[0, "module-cropposts"]], "display module": [[0, "module-display"]], "faces module": [[0, "module-faces"]], "multimodal search module": [[0, "module-multimodal_search"]], "summary module": [[0, "module-summary"]], "text module": [[0, "module-text"]], "utils module": [[0, "module-utils"]]}, "docnames": ["ammico", "create_API_key_link", "index", "license_link", "modules", "notebooks/DemoNotebook_ammico", "notebooks/Example cropposts", "readme_link", "set_up_credentials"], "envversion": {"nbsphinx": 4, "sphinx": 61, "sphinx.domains.c": 3, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 9, "sphinx.domains.index": 1, "sphinx.domains.javascript": 3, "sphinx.domains.math": 2, "sphinx.domains.python": 4, "sphinx.domains.rst": 2, "sphinx.domains.std": 2}, "filenames": ["ammico.rst", "create_API_key_link.md", "index.rst", "license_link.md", "modules.rst", "notebooks/DemoNotebook_ammico.ipynb", "notebooks/Example cropposts.ipynb", "readme_link.md", "set_up_credentials.md"], "indexentries": {"all_allowed_model_types (summary.summarydetector attribute)": [[0, "summary.SummaryDetector.all_allowed_model_types", false]], "allowed_analysis_types (summary.summarydetector attribute)": [[0, "summary.SummaryDetector.allowed_analysis_types", false]], "allowed_model_types (summary.summarydetector attribute)": [[0, "summary.SummaryDetector.allowed_model_types", false]], "allowed_new_model_types (summary.summarydetector attribute)": [[0, "summary.SummaryDetector.allowed_new_model_types", false]], "ammico_prefetch_models() (in module utils)": [[0, "utils.ammico_prefetch_models", false]], "analyse_image() (colors.colordetector method)": [[0, "colors.ColorDetector.analyse_image", false]], "analyse_image() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.analyse_image", false]], "analyse_image() (summary.summarydetector method)": [[0, "summary.SummaryDetector.analyse_image", false]], "analyse_image() (text.textdetector method)": [[0, "text.TextDetector.analyse_image", false]], "analyse_image() (utils.analysismethod method)": [[0, "utils.AnalysisMethod.analyse_image", false]], "analyse_questions() (summary.summarydetector method)": [[0, "summary.SummaryDetector.analyse_questions", false]], "analyse_summary() (summary.summarydetector method)": [[0, "summary.SummaryDetector.analyse_summary", false]], "analyse_topic() (text.postprocesstext method)": [[0, "text.PostprocessText.analyse_topic", false]], "analysisexplorer (class in display)": [[0, "display.AnalysisExplorer", false]], "analysismethod (class in utils)": [[0, "utils.AnalysisMethod", false]], "analyze_single_face() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.analyze_single_face", false]], "append_data_to_dict() (in module utils)": [[0, "utils.append_data_to_dict", false]], "check_for_missing_keys() (in module utils)": [[0, "utils.check_for_missing_keys", false]], "check_model() (summary.summarydetector method)": [[0, "summary.SummaryDetector.check_model", false]], "clean_subdict() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.clean_subdict", false]], "clean_text() (text.textdetector method)": [[0, "text.TextDetector.clean_text", false]], "colordetector (class in colors)": [[0, "colors.ColorDetector", false]], "colors": [[0, "module-colors", false]], "compute_crop_corner() (in 
module cropposts)": [[0, "cropposts.compute_crop_corner", false]], "compute_gradcam_batch() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.compute_gradcam_batch", false]], "crop_image_from_post() (in module cropposts)": [[0, "cropposts.crop_image_from_post", false]], "crop_media_posts() (in module cropposts)": [[0, "cropposts.crop_media_posts", false]], "crop_posts_from_refs() (in module cropposts)": [[0, "cropposts.crop_posts_from_refs", false]], "crop_posts_image() (in module cropposts)": [[0, "cropposts.crop_posts_image", false]], "cropposts": [[0, "module-cropposts", false]], "deepface_symlink_processor() (in module faces)": [[0, "faces.deepface_symlink_processor", false]], "display": [[0, "module-display", false]], "downloadresource (class in utils)": [[0, "utils.DownloadResource", false]], "draw_matches() (in module cropposts)": [[0, "cropposts.draw_matches", false]], "dump_df() (in module utils)": [[0, "utils.dump_df", false]], "emotiondetector (class in faces)": [[0, "faces.EmotionDetector", false]], "extract_image_features_basic() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.extract_image_features_basic", false]], "extract_image_features_blip2() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.extract_image_features_blip2", false]], "extract_image_features_clip() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.extract_image_features_clip", false]], "extract_text_features() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.extract_text_features", false]], "faces": [[0, "module-faces", false]], "facial_expression_analysis() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.facial_expression_analysis", false]], "find_files() (in module utils)": [[0, "utils.find_files", false]], "get() (utils.downloadresource method)": [[0, "utils.DownloadResource.get", false]], "get_att_map() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.get_att_map", false]], "get_color_table() (in module utils)": [[0, "utils.get_color_table", false]], "get_dataframe() (in module utils)": [[0, "utils.get_dataframe", false]], "get_pathes_from_query() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.get_pathes_from_query", false]], "get_text_df() (text.postprocesstext method)": [[0, "text.PostprocessText.get_text_df", false]], "get_text_dict() (text.postprocesstext method)": [[0, "text.PostprocessText.get_text_dict", false]], "get_text_from_image() (text.textdetector method)": [[0, "text.TextDetector.get_text_from_image", false]], "image_text_match_reordering() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.image_text_match_reordering", false]], "initialize_dict() (in module utils)": [[0, "utils.initialize_dict", false]], "is_interactive() (in module utils)": [[0, "utils.is_interactive", false]], "iterable() (in module utils)": [[0, "utils.iterable", false]], "itm_text_precessing() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.itm_text_precessing", false]], "kp_from_matches() (in module cropposts)": [[0, "cropposts.kp_from_matches", false]], "load_feature_extractor_model_albef() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_albef", false]], "load_feature_extractor_model_blip() 
(multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_blip", false]], "load_feature_extractor_model_blip2() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_blip2", false]], "load_feature_extractor_model_clip_base() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_clip_base", false]], "load_feature_extractor_model_clip_vitl14() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_clip_vitl14", false]], "load_feature_extractor_model_clip_vitl14_336() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_clip_vitl14_336", false]], "load_model() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model", false]], "load_model_base() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_base", false]], "load_model_base_blip2_opt_caption_coco_opt67b() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_base_blip2_opt_caption_coco_opt67b", false]], "load_model_base_blip2_opt_pretrain_opt67b() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_base_blip2_opt_pretrain_opt67b", false]], "load_model_blip2_opt_caption_coco_opt27b() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_opt_caption_coco_opt27b", false]], "load_model_blip2_opt_pretrain_opt27b() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_opt_pretrain_opt27b", false]], "load_model_blip2_t5_caption_coco_flant5xl() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_t5_caption_coco_flant5xl", false]], "load_model_blip2_t5_pretrain_flant5xl() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_t5_pretrain_flant5xl", false]], "load_model_blip2_t5_pretrain_flant5xxl() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_t5_pretrain_flant5xxl", false]], "load_model_large() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_large", false]], "load_new_model() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_new_model", false]], "load_tensors() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_tensors", false]], "load_vqa_model() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_vqa_model", false]], "matching_points() (in module cropposts)": [[0, "cropposts.matching_points", false]], "module": [[0, "module-colors", false], [0, "module-cropposts", false], [0, "module-display", false], [0, "module-faces", false], [0, "module-multimodal_search", false], [0, "module-summary", false], [0, "module-text", false], [0, "module-utils", false]], "multimodal_device (multimodal_search.multimodalsearch attribute)": [[0, "multimodal_search.MultimodalSearch.multimodal_device", false]], "multimodal_search": [[0, "module-multimodal_search", false]], "multimodal_search() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.multimodal_search", false]], "multimodalsearch (class in multimodal_search)": [[0, "multimodal_search.MultimodalSearch", false]], "parsing_images() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.parsing_images", 
false]], "paste_image_and_comment() (in module cropposts)": [[0, "cropposts.paste_image_and_comment", false]], "postprocesstext (class in text)": [[0, "text.PostprocessText", false]], "querys_processing() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.querys_processing", false]], "read_and_process_images() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.read_and_process_images", false]], "read_and_process_images_itm() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.read_and_process_images_itm", false]], "read_csv() (text.textanalyzer method)": [[0, "text.TextAnalyzer.read_csv", false]], "read_img() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.read_img", false]], "remove_linebreaks() (text.textdetector method)": [[0, "text.TextDetector.remove_linebreaks", false]], "resize_img() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.resize_img", false]], "resources (utils.downloadresource attribute)": [[0, "utils.DownloadResource.resources", false]], "rgb2name() (colors.colordetector method)": [[0, "colors.ColorDetector.rgb2name", false]], "run_server() (display.analysisexplorer method)": [[0, "display.AnalysisExplorer.run_server", false]], "save_tensors() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.save_tensors", false]], "set_keys() (colors.colordetector method)": [[0, "colors.ColorDetector.set_keys", false]], "set_keys() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.set_keys", false]], "set_keys() (text.textdetector method)": [[0, "text.TextDetector.set_keys", false]], "set_keys() (utils.analysismethod method)": [[0, "utils.AnalysisMethod.set_keys", false]], "show_results() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.show_results", false]], "summary": [[0, "module-summary", false]], "summarydetector (class in summary)": [[0, "summary.SummaryDetector", false]], "text": [[0, "module-text", false]], "text_ner() (text.textdetector method)": [[0, "text.TextDetector.text_ner", false]], "text_sentiment_transformers() (text.textdetector method)": [[0, "text.TextDetector.text_sentiment_transformers", false]], "text_summary() (text.textdetector method)": [[0, "text.TextDetector.text_summary", false]], "textanalyzer (class in text)": [[0, "text.TextAnalyzer", false]], "textdetector (class in text)": [[0, "text.TextDetector", false]], "translate_text() (text.textdetector method)": [[0, "text.TextDetector.translate_text", false]], "update_picture() (display.analysisexplorer method)": [[0, "display.AnalysisExplorer.update_picture", false]], "upload_model_blip2_coco() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.upload_model_blip2_coco", false]], "upload_model_blip_base() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.upload_model_blip_base", false]], "upload_model_blip_large() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.upload_model_blip_large", false]], "utils": [[0, "module-utils", false]], "wears_mask() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.wears_mask", false]]}, "objects": {"": [[0, 0, 0, "-", "colors"], [0, 0, 0, "-", "cropposts"], [0, 0, 0, "-", "display"], [0, 0, 0, "-", "faces"], [0, 0, 0, "-", "multimodal_search"], [0, 0, 0, "-", "summary"], [0, 0, 0, "-", 
"text"], [0, 0, 0, "-", "utils"]], "colors": [[0, 1, 1, "", "ColorDetector"]], "colors.ColorDetector": [[0, 2, 1, "", "analyse_image"], [0, 2, 1, "", "rgb2name"], [0, 2, 1, "", "set_keys"]], "cropposts": [[0, 3, 1, "", "compute_crop_corner"], [0, 3, 1, "", "crop_image_from_post"], [0, 3, 1, "", "crop_media_posts"], [0, 3, 1, "", "crop_posts_from_refs"], [0, 3, 1, "", "crop_posts_image"], [0, 3, 1, "", "draw_matches"], [0, 3, 1, "", "kp_from_matches"], [0, 3, 1, "", "matching_points"], [0, 3, 1, "", "paste_image_and_comment"]], "display": [[0, 1, 1, "", "AnalysisExplorer"]], "display.AnalysisExplorer": [[0, 2, 1, "", "run_server"], [0, 2, 1, "", "update_picture"]], "faces": [[0, 1, 1, "", "EmotionDetector"], [0, 3, 1, "", "deepface_symlink_processor"]], "faces.EmotionDetector": [[0, 2, 1, "", "analyse_image"], [0, 2, 1, "", "analyze_single_face"], [0, 2, 1, "", "clean_subdict"], [0, 2, 1, "", "facial_expression_analysis"], [0, 2, 1, "", "set_keys"], [0, 2, 1, "", "wears_mask"]], "multimodal_search": [[0, 1, 1, "", "MultimodalSearch"]], "multimodal_search.MultimodalSearch": [[0, 2, 1, "", "compute_gradcam_batch"], [0, 2, 1, "", "extract_image_features_basic"], [0, 2, 1, "", "extract_image_features_blip2"], [0, 2, 1, "", "extract_image_features_clip"], [0, 2, 1, "", "extract_text_features"], [0, 2, 1, "", "get_att_map"], [0, 2, 1, "", "get_pathes_from_query"], [0, 2, 1, "", "image_text_match_reordering"], [0, 2, 1, "", "itm_text_precessing"], [0, 2, 1, "", "load_feature_extractor_model_albef"], [0, 2, 1, "", "load_feature_extractor_model_blip"], [0, 2, 1, "", "load_feature_extractor_model_blip2"], [0, 2, 1, "", "load_feature_extractor_model_clip_base"], [0, 2, 1, "", "load_feature_extractor_model_clip_vitl14"], [0, 2, 1, "", "load_feature_extractor_model_clip_vitl14_336"], [0, 2, 1, "", "load_tensors"], [0, 4, 1, "", "multimodal_device"], [0, 2, 1, "", "multimodal_search"], [0, 2, 1, "", "parsing_images"], [0, 2, 1, "", "querys_processing"], [0, 2, 1, "", "read_and_process_images"], [0, 2, 1, "", "read_and_process_images_itm"], [0, 2, 1, "", "read_img"], [0, 2, 1, "", "resize_img"], [0, 2, 1, "", "save_tensors"], [0, 2, 1, "", "show_results"], [0, 2, 1, "", "upload_model_blip2_coco"], [0, 2, 1, "", "upload_model_blip_base"], [0, 2, 1, "", "upload_model_blip_large"]], "summary": [[0, 1, 1, "", "SummaryDetector"]], "summary.SummaryDetector": [[0, 4, 1, "", "all_allowed_model_types"], [0, 4, 1, "", "allowed_analysis_types"], [0, 4, 1, "", "allowed_model_types"], [0, 4, 1, "", "allowed_new_model_types"], [0, 2, 1, "", "analyse_image"], [0, 2, 1, "", "analyse_questions"], [0, 2, 1, "", "analyse_summary"], [0, 2, 1, "", "check_model"], [0, 2, 1, "", "load_model"], [0, 2, 1, "", "load_model_base"], [0, 2, 1, "", "load_model_base_blip2_opt_caption_coco_opt67b"], [0, 2, 1, "", "load_model_base_blip2_opt_pretrain_opt67b"], [0, 2, 1, "", "load_model_blip2_opt_caption_coco_opt27b"], [0, 2, 1, "", "load_model_blip2_opt_pretrain_opt27b"], [0, 2, 1, "", "load_model_blip2_t5_caption_coco_flant5xl"], [0, 2, 1, "", "load_model_blip2_t5_pretrain_flant5xl"], [0, 2, 1, "", "load_model_blip2_t5_pretrain_flant5xxl"], [0, 2, 1, "", "load_model_large"], [0, 2, 1, "", "load_new_model"], [0, 2, 1, "", "load_vqa_model"]], "text": [[0, 1, 1, "", "PostprocessText"], [0, 1, 1, "", "TextAnalyzer"], [0, 1, 1, "", "TextDetector"]], "text.PostprocessText": [[0, 2, 1, "", "analyse_topic"], [0, 2, 1, "", "get_text_df"], [0, 2, 1, "", "get_text_dict"]], "text.TextAnalyzer": [[0, 2, 1, "", "read_csv"]], "text.TextDetector": [[0, 
2, 1, "", "analyse_image"], [0, 2, 1, "", "clean_text"], [0, 2, 1, "", "get_text_from_image"], [0, 2, 1, "", "remove_linebreaks"], [0, 2, 1, "", "set_keys"], [0, 2, 1, "", "text_ner"], [0, 2, 1, "", "text_sentiment_transformers"], [0, 2, 1, "", "text_summary"], [0, 2, 1, "", "translate_text"]], "utils": [[0, 1, 1, "", "AnalysisMethod"], [0, 1, 1, "", "DownloadResource"], [0, 3, 1, "", "ammico_prefetch_models"], [0, 3, 1, "", "append_data_to_dict"], [0, 3, 1, "", "check_for_missing_keys"], [0, 3, 1, "", "dump_df"], [0, 3, 1, "", "find_files"], [0, 3, 1, "", "get_color_table"], [0, 3, 1, "", "get_dataframe"], [0, 3, 1, "", "initialize_dict"], [0, 3, 1, "", "is_interactive"], [0, 3, 1, "", "iterable"]], "utils.AnalysisMethod": [[0, 2, 1, "", "analyse_image"], [0, 2, 1, "", "set_keys"]], "utils.DownloadResource": [[0, 2, 1, "", "get"], [0, 4, 1, "", "resources"]]}, "objnames": {"0": ["py", "module", "Python module"], "1": ["py", "class", "Python class"], "2": ["py", "method", "Python method"], "3": ["py", "function", "Python function"], "4": ["py", "attribute", "Python attribute"]}, "objtypes": {"0": "py:module", "1": "py:class", "2": "py:method", "3": "py:function", "4": "py:attribute"}, "terms": {"": 5, "0": [0, 2, 7], "00": 6, "03": 5, "1": [0, 1, 2, 8], "10": [5, 6, 7], "100": [0, 5, 6], "1000": [1, 8], "11": 7, "12": [5, 7], "14": 0, "15": [5, 6], "16": 5, "163": 7, "19": 0, "1976": 0, "2": [0, 2], "20": [0, 5], "2022": [3, 7], "2023": 5, "20gb": 5, "23": 7, "240": 0, "240p": 0, "27": 5, "28": 5, "3": [0, 2], "30": [0, 5], "336": 0, "35": 5, "3_non": 5, "4": [2, 7], "5": [0, 5], "50": [0, 1, 5, 8], "5_clip_base_saved_features_imag": 5, "6": [0, 5, 7], "60gb": 5, "61": [5, 6], "7": 7, "7b": [0, 5], "8": [0, 7], "8050": 0, "8055": 5, "8057": 5, "9": 0, "981aa55a3b13": 5, "A": [0, 3, 5], "AND": 3, "AS": 3, "And": 5, "BE": 3, "BUT": 3, "Be": 7, "But": 5, "FOR": 3, "For": [5, 7], "IN": 3, "If": [0, 5, 6, 7], "In": [1, 5, 7, 8], "It": [1, 5, 6, 7, 8], "NO": 3, "NOT": 3, "No": 5, "OF": 3, "OR": 3, "One": 0, "Or": [1, 5, 8], "THE": 3, "TO": 3, "That": 5, "The": [0, 1, 2, 3, 6, 7, 8], "Then": 5, "There": 7, "These": [0, 5, 7], "To": [0, 5, 7], "WITH": 3, "With": 5, "_": 5, "__file__": 7, "_saved_features_imag": 5, "a100": 5, "a4f8f3": 5, "ab": 5, "about": [0, 5, 7], "abov": [3, 5, 7], "abus": 7, "access": 2, "accord": 7, "account": [1, 7, 8], "accur": [5, 7], "action": 3, "activ": 7, "ad": 5, "adapt": 5, "add": [5, 7], "addendum": 7, "addit": 0, "advanc": 5, "af0f99b": 5, "after": [1, 5, 8], "ag": [5, 7], "again": 5, "ai": 2, "albef": 5, "albef_feature_extractor": 0, "algorithm": [0, 5], "all": [0, 2, 3], "all_allowed_model_typ": [0, 4], "allow": [0, 5, 7], "allowed_analysis_typ": [0, 4], "allowed_model_typ": [0, 4], "allowed_new_model_typ": [0, 4], "along": 5, "alreadi": 5, "also": [0, 5, 7], "altern": 5, "american": 5, "ammico": [0, 1, 6, 8], "ammico_data_hom": 5, "ammico_env": 7, "ammico_prefetch_model": [0, 4], "among": 5, "an": [0, 3, 5, 6, 7], "analis": 0, "analys": [0, 2, 7], "analyse_imag": [0, 4, 5], "analyse_quest": [0, 4], "analyse_summari": [0, 4], "analyse_text": [0, 5, 7], "analyse_top": [0, 4], "analysi": [0, 2], "analysis_explor": 5, "analysis_typ": [0, 5], "analysisexplor": [0, 4, 5], "analysismethod": [0, 4], "analyz": [0, 2, 7], "analyze_single_fac": [0, 4], "analyze_text": 0, "anger": 5, "angri": 5, "ani": [0, 1, 3, 5, 7, 8], "anoth": 7, "answer": [0, 5, 7], "api": [0, 2, 5, 7], "app": 5, "append": 0, "append_data_to_dict": [0, 4, 5], "appli": 5, "applic": 6, "approach": 5, 
"appropri": 0, "approxim": 5, "ar": [0, 2, 5, 6], "architectur": 0, "archiv": 6, "area": 0, "arg": 0, "argument": 5, "aris": 3, "around": [0, 7], "arrai": 0, "art": 7, "as_posix": [5, 6], "asian": 5, "ask": [0, 5], "assign": 5, "associ": [3, 5], "asyncbatchannotatefil": 7, "asyncbatchannotateimag": 7, "asynchron": 7, "att_map": 0, "attent": 0, "author": 3, "automat": [5, 7], "avail": [5, 7], "avif": [0, 5], "avoid": 7, "b": 5, "back": [0, 1, 8], "background": 5, "bar": 5, "base": [0, 5, 7], "base_coco": 0, "batch": [0, 6, 7], "batch_siz": [0, 5], "batchannotatefil": 7, "batchannotateimag": 7, "becaus": [0, 5], "bee": 0, "been": [1, 5, 8], "befor": [0, 5, 7], "being": 5, "below": [0, 5, 6, 7], "bert": 5, "bertop": 0, "best": [5, 7], "best_simularity_value_in_current_search": 5, "better": 5, "between": [0, 5, 7], "beyond": 0, "bigger": 5, "bill": [1, 8], "binari": 0, "bit": 5, "black": [0, 7], "blank": [1, 8], "blip": 5, "blip2": 0, "blip2_coco": [0, 5], "blip2_feature_extractor": 0, "blip2_image_text_match": 0, "blip2_opt_caption_coco_opt2": [0, 5], "blip2_opt_caption_coco_opt6": [0, 5], "blip2_opt_pretrain_opt2": [0, 5], "blip2_opt_pretrain_opt6": [0, 5], "blip2_t5_caption_coco_flant5xl": [0, 5], "blip2_t5_pretrain_flant5xl": [0, 5], "blip2_t5_pretrain_flant5xxl": [0, 5], "blip_bas": 5, "blip_capt": 0, "blip_feature_extractor": 0, "blip_image_text_match": 0, "blip_larg": 5, "blip_vqa": 0, "block": 0, "block_num": 0, "blue": [0, 7], "blur": 0, "bool": [0, 5], "both": [5, 7], "briefli": 7, "bring": [1, 8], "brown": [0, 7], "browser": [1, 8], "build": 7, "c": [0, 3, 7], "calcul": 5, "call": [5, 7], "callback": 0, "campaign": 5, "can": [0, 1, 2, 5, 6, 8], "capabl": 5, "caption": [0, 5, 7], "caption_coco_": 5, "caption_coco_flant5xl": 0, "caption_coco_opt2": 0, "caption_coco_opt6": 0, "card": 5, "care": 7, "carri": [5, 6, 7], "case": 5, "categor": [0, 6], "categori": 5, "cell": [5, 6], "chang": 5, "channel": 7, "charg": [1, 3, 8], "check": [0, 5, 6, 7], "check_for_missing_kei": [0, 4], "check_model": [0, 4], "checkpoint": 5, "choos": [5, 7], "chosen": 0, "cie": 0, "citi": 5, "claim": 3, "class": [0, 5], "classif": 0, "classifi": 7, "clean": [0, 5, 7], "clean_subdict": [0, 4], "clean_text": [0, 4], "cli": [0, 5], "click": [1, 5, 8], "clip_bas": 5, "clip_feature_extractor": 0, "clip_vitl14": 5, "clip_vitl14_336": 5, "closest": 0, "cloud": [0, 2], "cnn": 5, "coco": [0, 5], "code": [5, 6], "colab": [5, 6, 7], "colaboratori": [1, 8], "collect": 7, "color": [0, 2], "color_analysi": [2, 4], "color_bgr2rgb": 6, "colordetector": [0, 4, 5], "colorgram": [0, 5, 7], "colour": [5, 7], "column": [0, 5, 7], "column_kei": [0, 5], "com": [5, 6, 7], "combat": 7, "combin": 5, "come": 5, "command": 7, "comment": [0, 6, 7], "common": 0, "compat": 2, "complet": 5, "compli": 7, "compon": 7, "comput": [0, 1, 5, 8], "computation": 5, "compute_crop_corn": [0, 4], "compute_gradcam_batch": [0, 4], "conceal": 5, "conda": [5, 7], "conda_prefix": 7, "condit": 3, "confer": 5, "confid": 5, "conll03": 5, "connect": [3, 5, 7], "consequenti": 0, "consequential_quest": [0, 5], "consid": [0, 5], "consist": 5, "consol": [1, 8], "const_image_summari": 5, "constant": [0, 5], "contain": [0, 2, 7], "content": [5, 6], "context": 5, "contract": 3, "contrib": 6, "conveni": 5, "convert": [0, 2], "coordin": 0, "copi": 3, "copyright": 3, "corner": [0, 1, 8], "correct": [5, 7], "correspond": 5, "cosequential_quest": 5, "could": 5, "count": 5, "countri": 5, "cover": 5, "cpp": 7, "cpu": [0, 5], "creat": [0, 1, 2, 7, 8], "creation": 5, "crop": 
[0, 2, 5], "crop_dir": 6, "crop_image_from_post": [0, 4], "crop_media_post": [0, 4, 6], "crop_post": 0, "crop_posts_from_ref": [0, 4, 6], "crop_posts_imag": [0, 4], "crop_view": [0, 6], "croppost": [2, 4, 6, 7], "crpo": 6, "css3": 0, "csv": [0, 2, 7], "csv_encod": 0, "csv_path": [0, 5], "cu11": 7, "cu118": 7, "cuda": [0, 5, 7], "cudatoolkit": 7, "cudnn": 7, "cudnn_path": 7, "current": [0, 1, 7, 8], "current_simularity_valu": 5, "cut": 0, "cv2": [0, 6], "cvtcolor": 6, "cyan": [0, 7], "d": [6, 7], "damag": 3, "dash": [0, 5], "dashboard": [1, 8], "data": [0, 2, 6, 7], "data_out": 5, "data_path": 5, "databas": 7, "datafram": [0, 2], "dataset": 2, "dbmdz": 5, "deactiv": 7, "deal": 3, "deepfac": [5, 7], "deepface_symlink_processor": [0, 4], "default": [0, 5], "defin": 5, "delet": 7, "delta": 0, "delta_e_method": 0, "demand": [0, 5], "demonstr": [2, 7], "depend": [5, 7], "depth": 7, "describ": 7, "descript": [5, 7], "descriptor": 0, "desir": 5, "detail": 5, "detect": [0, 2], "detector": [2, 7], "determin": 0, "determinist": 0, "deterministic_summari": 5, "develop": [5, 7], "devic": 0, "device_typ": [0, 5], "df": 5, "dict": [0, 5], "dictionari": [0, 5], "differ": [5, 6], "directli": [1, 5, 8], "directori": [0, 5], "dirnam": 7, "discard": 5, "disgust": 5, "disk": 7, "displai": [2, 4, 5], "distanc": 7, "distilbart": 5, "distilbert": 5, "distribut": 3, "dmatch": 0, "do": [3, 5, 7], "document": 3, "doe": [0, 5, 7], "doesn": 6, "dog": 5, "domin": 5, "don": 2, "done": [1, 5, 6, 7, 8], "dopamin": 5, "dot": [1, 8], "down": [1, 8], "download": [0, 1, 5, 7, 8], "downloadresourc": [0, 4], "draw_match": [0, 4], "drive": [1, 5, 6, 7, 8], "drop": [1, 8], "dropdown": 5, "due": [5, 6], "dump": [0, 5], "dump_df": [0, 4, 5], "dump_everi": 5, "dump_fil": 5, "e": [0, 7], "each": [0, 5], "easi": 5, "easier": 6, "easili": 5, "echo": 7, "either": [0, 5], "element": [5, 7], "emot": 5, "emotion_threshold": [0, 5], "emotiondetector": [0, 4, 5], "emotitiondetector": 5, "empti": [0, 5], "enabl": [2, 5, 7], "english": [0, 5, 7], "enter": [1, 8], "entiti": [0, 5, 7], "entity_typ": 5, "entri": [0, 5], "enumer": 5, "env_var": 7, "environ": [0, 5], "equip": 5, "error": [0, 5, 7], "estim": 0, "etc": 7, "even": 5, "event": 3, "everi": 5, "everyth": 6, "ex": 7, "exampl": [5, 6], "exclud": 0, "execut": [5, 6], "exist": 5, "exist_ok": 5, "experiment": [5, 7], "explain": 5, "explicitli": 5, "explor": [0, 5], "export": [5, 7], "express": [0, 2, 3, 7], "ext": 0, "extens": [5, 7], "extra": 6, "extrac": 7, "extract": 0, "extract_image_features_bas": [0, 4], "extract_image_features_blip2": [0, 4], "extract_image_features_clip": [0, 4], "extract_text_featur": [0, 4], "f": 6, "f2482bf": 5, "face": [2, 4, 7], "facial": [0, 2, 7], "facial_expression_analysi": [0, 4], "failsaf": 7, "fals": [0, 5, 6], "faq": 2, "fast": 5, "fear": 5, "featur": [0, 2], "feature_extractor": 0, "features_image_stack": [0, 5], "features_text": 0, "few": [5, 7], "field": 0, "figsiz": 6, "figur": 6, "file": [0, 1, 2, 3, 6, 7, 8], "filelist": 0, "filenam": [0, 5], "filepath": 0, "fill": 5, "filter": [0, 5], "filter_number_of_imag": [0, 5], "filter_rel_error": [0, 5], "filter_val_limit": [0, 5], "final_h": 0, "find": [0, 5, 6, 7], "find_fil": [0, 4, 5, 6], "fine": [5, 6], "finetun": 5, "first": [0, 1, 5, 6, 8], "fit": 3, "flag": 5, "flake8": [5, 6], "flan": 0, "flant5": 5, "flant5xl": 5, "flant5xxl": 5, "flex": 5, "float": [0, 5], "folder": [1, 6, 7, 8], "follow": [1, 3, 5, 7, 8], "forg": 7, "form": [0, 5], "format": [0, 5], "found": [0, 5, 6, 7], "fraction": 5, "frame": 
0, "framework": 7, "frankfurt": 5, "free": [0, 1, 3, 8], "frequent": 0, "from": [0, 1, 3, 6, 7, 8], "function": [0, 5], "furnish": 3, "further": [2, 7], "g": 7, "gate": 5, "gb": [5, 7], "gbq": 5, "gender": [5, 7], "gener": [0, 2, 5, 7], "get": [0, 1, 4, 5, 7, 8], "get_att_map": [0, 4], "get_color_t": [0, 4], "get_datafram": [0, 4, 5], "get_ipython": [5, 6], "get_pathes_from_queri": [0, 4], "get_text_df": [0, 4], "get_text_dict": [0, 4], "get_text_from_imag": [0, 4], "gif": [0, 5], "git": [5, 6], "github": [5, 6], "give": 5, "given": [0, 5, 6], "global": 0, "go": [1, 8], "googl": [0, 2, 6], "google_application_credenti": [5, 7], "googletran": [5, 7], "gpu": [5, 7], "gradcam": 0, "grant": 3, "graphic": 2, "green": [0, 7], "grei": [0, 7], "h_margin": 0, "ha": [1, 5, 8], "happen": 2, "happi": 5, "have": [2, 5], "head": [5, 6], "heat": 5, "heavi": 5, "held": 7, "here": [1, 5, 7, 8], "herebi": 3, "high": 5, "hold": 5, "holder": 3, "horizont": 0, "hostedtoolcach": 0, "hour": 7, "how": [2, 5], "howev": 5, "hpc": 5, "http": [5, 6], "hug": 7, "huggingfac": 5, "human": 5, "i": [0, 1, 2, 3, 5, 6, 8], "id": [0, 1, 5, 8], "ideal": 5, "identifi": 5, "ignor": [0, 6], "imag": [0, 1, 2, 6, 8], "image_df": 5, "image_dict": 5, "image_example_path": 5, "image_example_queri": 5, "image_gradcam_with_itm": [0, 5], "image_kei": [0, 5], "image_nam": [0, 5], "image_path": 0, "image_summary_detector": 5, "image_summary_vqa_detector": 5, "image_text_match_reord": [0, 4, 5], "images_tensor": 0, "img": [0, 5], "img1": 0, "img2": 0, "img_path": 0, "immedi": 7, "imperfect": 6, "implement": 5, "impli": 3, "import": [2, 6, 7], "importlib_resourc": [5, 6], "improp": 6, "improv": 7, "imread": 6, "imshow": 6, "includ": [0, 3, 5], "incompat": 5, "increment": 5, "index": [0, 2, 7], "indic": 0, "inform": [1, 6, 7, 8], "inherit": 0, "iniati": 5, "init": 5, "initi": [0, 5, 7], "initialize_dict": [0, 4], "input": [0, 2, 7], "insid": 5, "inspect": 2, "instal": [2, 5, 6], "instanc": 5, "instead": 5, "instruct": [2, 5, 7], "int": [0, 5], "intellig": 7, "intens": 5, "interact": [0, 5], "interfac": 2, "internet": 2, "ipynb": 7, "is_interact": [0, 4], "isdir": 6, "item": [5, 6], "iter": [0, 4], "itm": [0, 5], "itm_model": [0, 5], "itm_model_typ": 0, "itm_scor": 5, "itm_scores2": 0, "itm_text_precess": [0, 4], "its": [5, 7], "iulusoi": 5, "jpeg": [0, 5], "jpg": [0, 5], "json": [1, 5, 7, 8], "jupyt": [1, 5, 8], "just": [0, 5, 7], "k": 5, "keep": 6, "kei": [0, 2, 7], "keypoint": 0, "keyword": [5, 7], "kind": 3, "kp1": 0, "kp2": 0, "kp_from_match": [0, 4], "kwarg": 0, "l": [0, 5], "languag": [0, 5, 7], "larg": [0, 5, 7], "large_coco": 0, "largest": 5, "later": 0, "latest": 6, "latter": [5, 7], "launch": 5, "lavi": [0, 5, 7], "ld_library_path": 7, "left": [1, 5, 8], "lemma": 7, "len": 5, "length": 0, "less": 5, "liabil": 3, "liabl": 3, "lib": [0, 7], "librari": [0, 5, 6, 7], "licens": 2, "lida": 5, "like": [1, 5, 8], "likelihood": 5, "limit": [0, 3, 5, 6], "line": 6, "linebreak": [0, 5], "list": [0, 1, 5, 8], "list_of_quest": [0, 5], "live": 7, "llm": 5, "load": [0, 5, 6], "load_dataset": 5, "load_feature_extractor_model_albef": [0, 4], "load_feature_extractor_model_blip": [0, 4], "load_feature_extractor_model_blip2": [0, 4], "load_feature_extractor_model_clip_bas": [0, 4], "load_feature_extractor_model_clip_vitl14": [0, 4], "load_feature_extractor_model_clip_vitl14_336": [0, 4], "load_model": [0, 4], "load_model_bas": [0, 4], "load_model_base_blip2_opt_caption_coco_opt67b": [0, 4], "load_model_base_blip2_opt_pretrain_opt67b": [0, 4], 
"load_model_blip2_opt_caption_coco_opt27b": [0, 4], "load_model_blip2_opt_pretrain_opt27b": [0, 4], "load_model_blip2_t5_caption_coco_flant5xl": [0, 4], "load_model_blip2_t5_pretrain_flant5xl": [0, 4], "load_model_blip2_t5_pretrain_flant5xxl": [0, 4], "load_model_larg": [0, 4], "load_new_model": [0, 4], "load_tensor": [0, 4], "load_vqa_model": [0, 4], "local": [5, 7], "locat": [5, 7], "log": 7, "login": 5, "look": [0, 1, 5, 6, 8], "loop": 5, "lower": 5, "m": 7, "machin": [5, 7], "made": 7, "mai": [5, 7], "main": [5, 7], "make": [1, 5, 7, 8], "man": 5, "manag": [1, 8], "mani": 5, "manual": 6, "map": [0, 5], "margin": 0, "mask": [0, 5, 7], "match": [0, 5, 6, 7], "matching_point": [0, 4], "matplotlib": 6, "maximum": [0, 5], "mean": 5, "media": [0, 2, 5, 6], "medicin": 5, "memori": [5, 7], "menu": [1, 5, 8], "merchant": 3, "merg": [0, 3], "merge_color": 0, "messag": 5, "metadata": 7, "method": [0, 5], "metric": [0, 7], "microsoft": 7, "might": 0, "min_match": 0, "minimum": [0, 5], "misinform": [2, 5], "miss": 0, "mit": 3, "mkdir": [5, 7], "model": [0, 7], "model_nam": [0, 5], "model_old": 0, "model_typ": [0, 5], "modifi": 3, "modul": [2, 7], "moment": 5, "month": [1, 8], "moral": 6, "more": [1, 5, 7, 8], "most": [0, 5], "mount": [5, 6], "msvc": 7, "much": 5, "multi_features_stack": 0, "multimod": [2, 4, 7], "multimodal_devic": [0, 4], "multimodal_search": [0, 4, 5, 7], "multimodalsearch": [0, 4, 5], "multipl": 5, "multiple_fac": 5, "must": 7, "mv": 6, "my_obj": 5, "mydict": [0, 5], "mydriv": 5, "n": [0, 5, 7], "n_color": 0, "name": [0, 1, 5, 6, 7, 8], "natur": 0, "ndarrai": 0, "necessari": [5, 7], "need": [0, 5, 7], "need_grad_cam": [0, 5], "neg": 5, "ner": 5, "nest": [0, 5], "neutral": 5, "new": [0, 1, 5, 7, 8], "next": 5, "nn": 0, "no_fac": 5, "non": 0, "nondeterministic_summari": 0, "none": [0, 5], "noninfring": 3, "noqa": [5, 6], "note": 5, "notebook": [1, 2, 6, 7, 8], "notic": 3, "now": [1, 5, 6, 8], "np": 0, "num": 5, "number": [0, 5, 7], "number_of_imag": 5, "numer": 5, "numpi": [0, 7], "nvidia": 7, "o": [5, 6, 7], "obj": 5, "object": [0, 5], "obtain": 3, "occur": 0, "off": 0, "offlin": 7, "old": 0, "onc": 5, "one": [0, 5, 7], "ones": 5, "onli": [0, 5, 6, 7], "onlin": 7, "open": 5, "opencv": 6, "oper": 7, "opt": [0, 5], "optim": 5, "option": [0, 5, 7], "orang": [0, 7], "orbax": 5, "order": [5, 7], "origin": [0, 5, 6], "other": [0, 3, 5, 7], "otherwis": [0, 3, 5], "our": [5, 7], "out": [1, 3, 5, 6, 7, 8], "outdict": 5, "output": [0, 2], "outsid": 5, "overal": 5, "overlap": 0, "own": 5, "p": [0, 7], "packag": [0, 2, 6], "page": [1, 2, 7, 8], "paid": 5, "panda": 2, "paper": 7, "paramet": [0, 5], "parent": 5, "pars": 0, "parsing_imag": [0, 4, 5], "part": [0, 6, 7], "parti": 7, "partial": 5, "particular": 3, "pass": 5, "past": 0, "paste_image_and_com": [0, 4], "path": [0, 5, 6], "path_post": 6, "path_ref": 6, "path_to_load_tensor": [0, 5], "path_to_save_tensor": [0, 5], "pathlib": 5, "patient": 5, "pattern": [0, 5], "peopl": 5, "per": [1, 8], "percentag": [0, 5, 7], "perform": [0, 5, 7], "period": 7, "permiss": 3, "permit": 3, "persist": 7, "person": [3, 5, 7], "pick": [1, 8], "pictur": [0, 2], "pil": 0, "pin": 6, "pink": [0, 7], "pip": [5, 6], "pipelin": [0, 5, 7], "pkg": [5, 6], "place": [1, 5, 7, 8], "pleas": [5, 6, 7], "plot": 6, "plt": 6, "plt_crop": [0, 6], "plt_imag": [0, 6], "plt_match": [0, 6], "png": [0, 5, 6], "pngimageplugin": 0, "point": 0, "polar": 7, "politician": 5, "pooch": 0, "pop": [1, 8], "port": [0, 5], "portion": 3, "posit": [0, 5], "possibl": [5, 7], "post": [0, 2, 5], 
"postprocesstext": [0, 4], "pre": [5, 7], "predict": [5, 7], "prefetch": 0, "prefix": 0, "prepar": 5, "preprocessor": 0, "presenc": 7, "present": [0, 5], "preserv": 5, "press": 5, "pretrain": [0, 5], "pretrain_": 5, "pretrain_opt2": 0, "pretrain_opt6": 0, "prevent": [6, 7], "previou": 5, "primari": 5, "print": [6, 7], "prioriti": 7, "privat": [1, 5, 8], "probabl": 5, "problem": [0, 2], "process": [0, 1, 5, 7, 8], "product": 5, "progress": 5, "project": [1, 7, 8], "prompt": [1, 5, 8], "proper": 7, "proport": 0, "provid": [0, 3, 5, 6, 7], "pt": [0, 5], "public": 7, "publish": 3, "purpl": [0, 7], "purpos": [3, 5], "put": [0, 7], "py": [0, 7], "pycocotool": 7, "pyplot": 6, "python": [0, 1, 5, 6, 7, 8], "python3": 0, "q": 6, "qq": 6, "qqq": [5, 6], "queri": [0, 2, 7], "querys_process": [0, 4], "question": [0, 5, 7], "quit": 5, "race": [5, 7], "race_threshold": [0, 5], "ram": 5, "random": [0, 5], "random_se": [0, 5], "rank": 5, "raw_imag": 0, "raw_img": 0, "re": [5, 6, 7], "read": [0, 2], "read_and_process_imag": [0, 4], "read_and_process_images_itm": [0, 4], "read_csv": [0, 4, 5], "read_img": [0, 4], "receiv": 7, "recogn": 7, "recognit": [0, 5], "recurs": [0, 5], "red": [0, 7], "reduc": [0, 5], "ref": [5, 6], "ref_dir": 6, "ref_fil": [0, 6], "ref_view": [0, 6], "refer": [0, 1, 5, 6, 8], "region": [0, 6], "regist": 0, "rel": 0, "relev": 5, "rememb": 5, "remot": 0, "remov": [0, 6, 7], "remove_linebreak": [0, 4], "reorder": 0, "report": 5, "repositori": 5, "represent": 5, "request": 7, "requir": [0, 5, 7], "rerun": 5, "resiz": 0, "resize_img": [0, 4], "resized_imag": 0, "resourc": [0, 4, 5], "respect": 5, "respons": 7, "restart": 5, "restrict": [3, 7], "result": [0, 7], "retinafac": [5, 7], "return": [0, 5, 7], "return_top": 0, "revis": 5, "revision_numb": [0, 5], "rf": 6, "rgb": 0, "rgb2name": [0, 4], "rgb_ref_view": 6, "rgb_view": 6, "right": [1, 3, 5, 8], "rl": 5, "rm": 6, "row": 5, "run": [0, 1, 5, 6, 7, 8], "run_serv": [0, 4, 5], "runtim": 5, "sad": 5, "same": [5, 6, 7], "sampl": [6, 7], "save": [0, 6], "save_crop_dir": [0, 6], "save_tensor": [0, 4], "saved_features_imag": 0, "saved_tensor": 0, "score": 0, "screen": [1, 8], "screenshot": [1, 8], "script": 7, "sdk": 7, "search": [1, 2, 4, 7, 8], "search_queri": [0, 5], "second": [0, 5], "section": 5, "see": [1, 5, 7, 8], "seed": [0, 5], "seem": 6, "select": [0, 1, 8], "self": 0, "sell": 3, "send": 7, "sent": 2, "sentiment": [0, 5, 7], "sentiment_scor": 5, "separ": 5, "sequenti": 5, "server": [0, 5], "servic": [1, 7, 8], "session": 5, "set": [0, 1, 2, 6, 7, 8], "set_kei": [0, 4], "setuptool": [5, 6], "seven": 5, "sever": [5, 7], "sh": 7, "shade": 0, "shall": 3, "share": 7, "short": 7, "shot": 5, "should": [0, 1, 5, 8], "show": [0, 1, 5, 6, 8], "show_result": [0, 4, 5], "showcas": 5, "shown": 5, "shuffel": 0, "shuffl": [0, 5], "sidebar": 5, "sift": 0, "sign": [1, 8], "signifi": [0, 5], "similar": [0, 5, 7], "simultan": 5, "sinc": 5, "singl": [0, 5], "site": 0, "size": [0, 5, 7], "skip": 5, "skip_extract": [0, 5], "slightli": 5, "slower": 5, "small": 0, "smaller": [0, 5], "so": [0, 3, 5, 7], "social": [0, 5, 6, 7], "softwar": 3, "solv": 2, "some": [5, 6, 7], "someth": [1, 5, 6, 8], "sometim": [0, 5, 6, 7], "sort": 0, "sorted_list": [0, 5], "sourc": [5, 7], "space": 7, "spaci": [5, 7], "special": 5, "specif": [0, 5], "specifi": 5, "speech": 7, "spell": 7, "ssc": 3, "ssciwr": [5, 6], "sshleifer": 5, "sst": 5, "stack": 0, "start": 0, "state": 7, "step": [2, 6, 7], "still": 2, "store": [5, 7], "str": [0, 5, 6], "string": 5, "studio": 7, "style": 5, 
"subdict": [0, 5], "subdirectori": [0, 5], "subfold": 5, "subject": [3, 7], "sublicens": 3, "subsequ": 5, "substanti": 3, "substitut": 6, "subtract": 0, "suffix": 0, "suitabl": 5, "summar": 5, "summari": [2, 4, 7], "summary_and_quest": [0, 5], "summary_model": 0, "summary_vis_processor": 0, "summary_vqa_model": 0, "summary_vqa_model_new": 0, "summary_vqa_txt_processor": 0, "summary_vqa_txt_processors_new": 0, "summary_vqa_vis_processor": 0, "summary_vqa_vis_processors_new": 0, "summarydetector": [0, 4, 5], "sure": [1, 5, 8], "surpris": 5, "syntax": 5, "system": 0, "t": [2, 6], "t5": 0, "ta": 5, "tab": 5, "tabl": 5, "take": [0, 5], "taken": [0, 5], "task": [5, 7], "tell": 5, "temporarili": 7, "ten": 0, "tensor": [0, 5], "tensorflow": 5, "tesor": 0, "test": [2, 6], "text": [2, 4, 6], "text_clean": [5, 7], "text_df": 5, "text_dict": 5, "text_english": [0, 5, 7], "text_english_correct": 7, "text_input": [0, 5], "text_languag": [5, 7], "text_ner": [0, 4], "text_query_index": 0, "text_sentiment_transform": [0, 4], "text_summari": [0, 4, 5], "textanalyz": [0, 4, 5], "textblob": 7, "textdetector": [0, 4, 5], "textual": 7, "tf": 5, "than": [1, 5, 8], "thei": [0, 5, 7], "them": 5, "therefor": 5, "thi": [0, 1, 3, 5, 6, 7, 8], "third": 7, "three": [1, 5, 8], "threshold": 5, "through": [0, 5], "thrown": 5, "thu": 5, "tiff": [0, 5], "time": [5, 7], "to_csv": 5, "togeth": 0, "toggl": 5, "token": [0, 7], "tokenized_text": 0, "tool": 2, "top": [1, 5, 8], "top1": 5, "topic": [0, 5, 7], "torch": [0, 7], "torchaudio": [5, 7], "torchdata": 5, "torchtext": 5, "torchvis": 7, "tort": 3, "total": 5, "tpu": 5, "tqdm": 5, "train": [0, 5], "transform": [0, 5, 7], "translat": [0, 2], "translate_text": [0, 4], "true": [0, 5, 6, 7], "try": 5, "ttl": 7, "tune": 5, "tupl": 0, "two": [0, 5, 7], "txt_processor": [0, 5], "type": [0, 5, 6], "typic": 7, "ummary_and_quest": 5, "uncas": 5, "uncom": 5, "under": 7, "unecessari": 0, "uninstal": [5, 6], "union": 0, "unrecogn": [0, 7], "unrecogniz": 5, "unzip": 6, "up": [0, 1, 5, 7, 8], "updat": [0, 5], "update_pictur": [0, 4], "upload": [1, 7, 8], "upload_model_blip2_coco": [0, 4], "upload_model_blip_bas": [0, 4], "upload_model_blip_larg": [0, 4], "upon": 5, "url": 7, "us": [0, 1, 2, 3, 8], "usa": 5, "usag": 2, "use_csv": 0, "user": [2, 7], "utf": 0, "util": [2, 4, 6], "v": 7, "v143": 7, "v2": 5, "v_margin": 0, "valu": [0, 5], "variabl": [5, 7], "veri": [5, 7], "version": [5, 6, 7], "vertic": [0, 1, 8], "via": 5, "video": 5, "view": [0, 6], "vis_processor": [0, 5], "vision": [0, 2], "visual": [0, 5, 7], "visual_input": 0, "visualstudio": 7, "vit": [0, 5], "vqa": [0, 5], "vram": 5, "vs_buildtool": 7, "wa": [0, 5, 7], "wai": [5, 6], "want": [5, 7], "warranti": 3, "we": [0, 5, 6], "wear": [0, 5, 7], "wears_mask": [0, 4, 5], "webp": [0, 5], "websit": 7, "well": 5, "wget": 6, "what": [2, 5], "when": [1, 5, 6, 7, 8], "where": [0, 1, 5, 7, 8], "whether": [0, 3], "which": [0, 5, 6, 7], "while": [0, 5], "white": [0, 7], "whitespac": 5, "whl": 7, "whole": 5, "whom": 3, "why": 5, "width": 0, "window": [1, 8], "wish": [1, 8], "without": [0, 3, 5, 7], "won": 7, "word": [0, 5, 7], "work": [0, 5, 6, 7], "world": 5, "worn": 5, "wrapper": 0, "write": [0, 2], "written": [0, 5], "wrong": 6, "x64": [0, 7], "x86": 7, "xl": 0, "xxl": 0, "y": [5, 6], "ye": 5, "yellow": [0, 7], "yet": 0, "you": [1, 5, 6, 7, 8], "your": [1, 2, 6, 7, 8], "zero": 5, "zip": 6}, "titles": ["text module", "Instructions how to generate and enable a google Cloud Vision API key", "Welcome to AMMICO\u2019s documentation!", 
"License", "AMMICO package modules", "AMMICO Demonstration Notebook", "Crop posts module", "AMMICO - AI Media and Misinformation Content Analysis Tool", "Instructions how to generate and enable a google Cloud Vision API key"], "titleterms": {"": 2, "0": 5, "1": [5, 7], "2": [5, 7], "3": [5, 7], "4": 5, "The": 5, "access": 7, "after": 7, "ai": 7, "all": 5, "ammico": [2, 4, 5, 7], "analys": 5, "analysi": [5, 7], "analyz": 5, "api": [1, 8], "ar": 7, "blip2": 5, "can": 7, "cloud": [1, 5, 7, 8], "color": [5, 7], "color_analysi": 0, "compat": 7, "contain": 5, "content": [2, 7], "convert": 5, "creat": 5, "crop": [6, 7], "croppost": 0, "csv": 5, "data": 5, "datafram": 5, "dataset": 5, "demonstr": 5, "detect": [5, 7], "detector": 5, "displai": 0, "document": 2, "don": 7, "emot": 7, "enabl": [1, 8], "environ": 7, "express": 5, "extract": [5, 7], "face": [0, 5], "facial": 5, "faq": 7, "featur": [5, 7], "file": 5, "first": 7, "folder": 5, "formul": 5, "from": 5, "further": 5, "gener": [1, 8], "googl": [1, 5, 7, 8], "graphic": 5, "happen": 7, "have": 7, "how": [1, 8], "http": 7, "hue": 7, "i": 7, "imag": [5, 7], "import": 5, "improv": 5, "index": 5, "indic": 2, "input": 5, "inspect": 5, "instal": 7, "instruct": [1, 8], "interfac": 5, "internet": 7, "kei": [1, 5, 8], "licens": 3, "media": 7, "micromamba": 7, "misinform": 7, "model": 5, "modul": [0, 4, 5, 6], "multimod": [0, 5], "notebook": 5, "org": 7, "output": 5, "packag": [4, 5, 7], "panda": 5, "pictur": 5, "pip": 7, "post": [6, 7], "prepar": 7, "problem": 7, "pytorch": 7, "queri": 5, "read": 5, "recognit": 7, "result": 5, "right": 7, "save": 5, "search": [0, 5], "second": 7, "select": 5, "sent": 7, "set": 5, "solv": 7, "step": 5, "still": 7, "summari": [0, 5], "t": 7, "tabl": 2, "tensorflow": 7, "test": 5, "text": [0, 5, 7], "tool": 7, "translat": [5, 7], "us": [5, 7], "usag": 7, "user": 5, "util": 0, "vision": [1, 5, 7, 8], "we": 7, "welcom": 2, "what": 7, "window": 7, "write": 5, "www": 7, "your": 5}})
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. First, install tensorflow (https://www.tensorflow.org/install/pip)": [[7, "first-install-tensorflow-https-www-tensorflow-org-install-pip"]], "2. Second, install pytorch": [[7, "second-install-pytorch"]], "3. After we prepared right environment we can install the ammico package": [[7, "after-we-prepared-right-environment-we-can-install-the-ammico-package"]], "AMMICO - AI Media and Misinformation Content Analysis Tool": [[7, "ammico-ai-media-and-misinformation-content-analysis-tool"]], "AMMICO Demonstration Notebook": [[5, "AMMICO-Demonstration-Notebook"]], "AMMICO package modules": [[4, "ammico-package-modules"]], "BLIP2 models": [[5, "BLIP2-models"]], "Color analysis of pictures": [[5, "Color-analysis-of-pictures"]], "Color/hue detection": [[7, "color-hue-detection"]], "Compatibility problems solving": [[7, "compatibility-problems-solving"]], "Content extraction": [[7, "content-extraction"]], "Contents:": [[2, null]], "Crop posts module": [[6, "Crop-posts-module"]], "Cropping of posts": [[7, "cropping-of-posts"]], "Detection of faces and facial expression analysis": [[5, "Detection-of-faces-and-facial-expression-analysis"]], "Emotion recognition": [[7, "emotion-recognition"]], "Ethical disclosure statement": [[5, "Ethical-disclosure-statement"]], "FAQ": [[7, "faq"]], "Features": [[7, "features"]], "Formulate your search queries": [[5, "Formulate-your-search-queries"]], "Further detector modules": [[5, "Further-detector-modules"]], "Image Multimodal Search": [[5, "Image-Multimodal-Search"]], "Image summary and query": [[5, "Image-summary-and-query"]], "Import the ammico package.": [[5, "Import-the-ammico-package."]], "Improve the search results": [[5, "Improve-the-search-results"]], "Indexing and extracting features from images in selected folder": [[5, "Indexing-and-extracting-features-from-images-in-selected-folder"]], "Indices and tables": [[2, "indices-and-tables"]], "Installation": [[7, "installation"]], "Instructions how to generate and enable a google Cloud Vision API key": [[1, "instructions-how-to-generate-and-enable-a-google-cloud-vision-api-key"], [8, "instructions-how-to-generate-and-enable-a-google-cloud-vision-api-key"]], "License": [[3, "license"]], "Micromamba": [[7, "micromamba"]], "Read in a csv file containing text and translating/analysing the text": [[5, "Read-in-a-csv-file-containing-text-and-translating/analysing-the-text"]], "Save search results to csv": [[5, "Save-search-results-to-csv"]], "Step 0: Create and set a Google Cloud Vision Key": [[5, "Step-0:-Create-and-set-a-Google-Cloud-Vision-Key"]], "Step 1: Read your data into AMMICO": [[5, "Step-1:-Read-your-data-into-AMMICO"]], "Step 2: Inspect the input files using the graphical user interface": [[5, "Step-2:-Inspect-the-input-files-using-the-graphical-user-interface"]], "Step 3: Analyze all images": [[5, "Step-3:-Analyze-all-images"]], "Step 4: Convert analysis output to pandas dataframe and write csv": [[5, "Step-4:-Convert-analysis-output-to-pandas-dataframe-and-write-csv"]], "Text extraction": [[7, "text-extraction"]], "The detector modules": [[5, "The-detector-modules"]], "Usage": [[7, "usage"]], "Use a test dataset": [[5, "Use-a-test-dataset"]], "Welcome to AMMICO\u2019s documentation!": [[2, "welcome-to-ammico-s-documentation"]], "What happens if I don\u2019t have internet access - can I still use ammico?": [[7, "what-happens-if-i-don-t-have-internet-access-can-i-still-use-ammico"]], "What happens to the images that are sent to google Cloud Vision?": [[7, 
"what-happens-to-the-images-that-are-sent-to-google-cloud-vision"]], "What happens to the text that is sent to google Translate?": [[7, "what-happens-to-the-text-that-is-sent-to-google-translate"]], "Windows": [[7, "windows"]], "color_analysis module": [[0, "module-colors"]], "cropposts module": [[0, "module-cropposts"]], "display module": [[0, "module-display"]], "faces module": [[0, "module-faces"]], "multimodal search module": [[0, "module-multimodal_search"]], "summary module": [[0, "module-summary"]], "text module": [[0, "module-text"]], "utils module": [[0, "module-utils"]]}, "docnames": ["ammico", "create_API_key_link", "index", "license_link", "modules", "notebooks/DemoNotebook_ammico", "notebooks/Example cropposts", "readme_link", "set_up_credentials"], "envversion": {"nbsphinx": 4, "sphinx": 61, "sphinx.domains.c": 3, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 9, "sphinx.domains.index": 1, "sphinx.domains.javascript": 3, "sphinx.domains.math": 2, "sphinx.domains.python": 4, "sphinx.domains.rst": 2, "sphinx.domains.std": 2}, "filenames": ["ammico.rst", "create_API_key_link.md", "index.rst", "license_link.md", "modules.rst", "notebooks/DemoNotebook_ammico.ipynb", "notebooks/Example cropposts.ipynb", "readme_link.md", "set_up_credentials.md"], "indexentries": {"all_allowed_model_types (summary.summarydetector attribute)": [[0, "summary.SummaryDetector.all_allowed_model_types", false]], "allowed_analysis_types (summary.summarydetector attribute)": [[0, "summary.SummaryDetector.allowed_analysis_types", false]], "allowed_model_types (summary.summarydetector attribute)": [[0, "summary.SummaryDetector.allowed_model_types", false]], "allowed_new_model_types (summary.summarydetector attribute)": [[0, "summary.SummaryDetector.allowed_new_model_types", false]], "ammico_prefetch_models() (in module utils)": [[0, "utils.ammico_prefetch_models", false]], "analyse_image() (colors.colordetector method)": [[0, "colors.ColorDetector.analyse_image", false]], "analyse_image() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.analyse_image", false]], "analyse_image() (summary.summarydetector method)": [[0, "summary.SummaryDetector.analyse_image", false]], "analyse_image() (text.textdetector method)": [[0, "text.TextDetector.analyse_image", false]], "analyse_image() (utils.analysismethod method)": [[0, "utils.AnalysisMethod.analyse_image", false]], "analyse_questions() (summary.summarydetector method)": [[0, "summary.SummaryDetector.analyse_questions", false]], "analyse_summary() (summary.summarydetector method)": [[0, "summary.SummaryDetector.analyse_summary", false]], "analyse_topic() (text.postprocesstext method)": [[0, "text.PostprocessText.analyse_topic", false]], "analysisexplorer (class in display)": [[0, "display.AnalysisExplorer", false]], "analysismethod (class in utils)": [[0, "utils.AnalysisMethod", false]], "analyze_single_face() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.analyze_single_face", false]], "append_data_to_dict() (in module utils)": [[0, "utils.append_data_to_dict", false]], "check_for_missing_keys() (in module utils)": [[0, "utils.check_for_missing_keys", false]], "check_model() (summary.summarydetector method)": [[0, "summary.SummaryDetector.check_model", false]], "clean_subdict() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.clean_subdict", false]], "clean_text() (text.textdetector method)": [[0, "text.TextDetector.clean_text", false]], "colordetector (class in colors)": [[0, 
"colors.ColorDetector", false]], "colors": [[0, "module-colors", false]], "compute_crop_corner() (in module cropposts)": [[0, "cropposts.compute_crop_corner", false]], "compute_gradcam_batch() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.compute_gradcam_batch", false]], "crop_image_from_post() (in module cropposts)": [[0, "cropposts.crop_image_from_post", false]], "crop_media_posts() (in module cropposts)": [[0, "cropposts.crop_media_posts", false]], "crop_posts_from_refs() (in module cropposts)": [[0, "cropposts.crop_posts_from_refs", false]], "crop_posts_image() (in module cropposts)": [[0, "cropposts.crop_posts_image", false]], "cropposts": [[0, "module-cropposts", false]], "deepface_symlink_processor() (in module faces)": [[0, "faces.deepface_symlink_processor", false]], "display": [[0, "module-display", false]], "downloadresource (class in utils)": [[0, "utils.DownloadResource", false]], "draw_matches() (in module cropposts)": [[0, "cropposts.draw_matches", false]], "dump_df() (in module utils)": [[0, "utils.dump_df", false]], "emotiondetector (class in faces)": [[0, "faces.EmotionDetector", false]], "ethical_disclosure() (in module faces)": [[0, "faces.ethical_disclosure", false]], "extract_image_features_basic() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.extract_image_features_basic", false]], "extract_image_features_blip2() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.extract_image_features_blip2", false]], "extract_image_features_clip() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.extract_image_features_clip", false]], "extract_text_features() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.extract_text_features", false]], "faces": [[0, "module-faces", false]], "facial_expression_analysis() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.facial_expression_analysis", false]], "find_files() (in module utils)": [[0, "utils.find_files", false]], "get() (utils.downloadresource method)": [[0, "utils.DownloadResource.get", false]], "get_att_map() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.get_att_map", false]], "get_color_table() (in module utils)": [[0, "utils.get_color_table", false]], "get_dataframe() (in module utils)": [[0, "utils.get_dataframe", false]], "get_pathes_from_query() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.get_pathes_from_query", false]], "get_text_df() (text.postprocesstext method)": [[0, "text.PostprocessText.get_text_df", false]], "get_text_dict() (text.postprocesstext method)": [[0, "text.PostprocessText.get_text_dict", false]], "get_text_from_image() (text.textdetector method)": [[0, "text.TextDetector.get_text_from_image", false]], "image_text_match_reordering() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.image_text_match_reordering", false]], "initialize_dict() (in module utils)": [[0, "utils.initialize_dict", false]], "is_interactive() (in module utils)": [[0, "utils.is_interactive", false]], "iterable() (in module utils)": [[0, "utils.iterable", false]], "itm_text_precessing() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.itm_text_precessing", false]], "kp_from_matches() (in module cropposts)": [[0, "cropposts.kp_from_matches", false]], 
"load_feature_extractor_model_albef() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_albef", false]], "load_feature_extractor_model_blip() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_blip", false]], "load_feature_extractor_model_blip2() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_blip2", false]], "load_feature_extractor_model_clip_base() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_clip_base", false]], "load_feature_extractor_model_clip_vitl14() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_clip_vitl14", false]], "load_feature_extractor_model_clip_vitl14_336() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_feature_extractor_model_clip_vitl14_336", false]], "load_model() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model", false]], "load_model_base() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_base", false]], "load_model_base_blip2_opt_caption_coco_opt67b() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_base_blip2_opt_caption_coco_opt67b", false]], "load_model_base_blip2_opt_pretrain_opt67b() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_base_blip2_opt_pretrain_opt67b", false]], "load_model_blip2_opt_caption_coco_opt27b() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_opt_caption_coco_opt27b", false]], "load_model_blip2_opt_pretrain_opt27b() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_opt_pretrain_opt27b", false]], "load_model_blip2_t5_caption_coco_flant5xl() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_t5_caption_coco_flant5xl", false]], "load_model_blip2_t5_pretrain_flant5xl() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_t5_pretrain_flant5xl", false]], "load_model_blip2_t5_pretrain_flant5xxl() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_blip2_t5_pretrain_flant5xxl", false]], "load_model_large() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_model_large", false]], "load_new_model() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_new_model", false]], "load_tensors() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.load_tensors", false]], "load_vqa_model() (summary.summarydetector method)": [[0, "summary.SummaryDetector.load_vqa_model", false]], "matching_points() (in module cropposts)": [[0, "cropposts.matching_points", false]], "module": [[0, "module-colors", false], [0, "module-cropposts", false], [0, "module-display", false], [0, "module-faces", false], [0, "module-multimodal_search", false], [0, "module-summary", false], [0, "module-text", false], [0, "module-utils", false]], "multimodal_device (multimodal_search.multimodalsearch attribute)": [[0, "multimodal_search.MultimodalSearch.multimodal_device", false]], "multimodal_search": [[0, "module-multimodal_search", false]], "multimodal_search() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.multimodal_search", false]], "multimodalsearch 
(class in multimodal_search)": [[0, "multimodal_search.MultimodalSearch", false]], "parsing_images() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.parsing_images", false]], "paste_image_and_comment() (in module cropposts)": [[0, "cropposts.paste_image_and_comment", false]], "postprocesstext (class in text)": [[0, "text.PostprocessText", false]], "querys_processing() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.querys_processing", false]], "read_and_process_images() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.read_and_process_images", false]], "read_and_process_images_itm() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.read_and_process_images_itm", false]], "read_csv() (text.textanalyzer method)": [[0, "text.TextAnalyzer.read_csv", false]], "read_img() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.read_img", false]], "remove_linebreaks() (text.textdetector method)": [[0, "text.TextDetector.remove_linebreaks", false]], "resize_img() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.resize_img", false]], "resources (utils.downloadresource attribute)": [[0, "utils.DownloadResource.resources", false]], "rgb2name() (colors.colordetector method)": [[0, "colors.ColorDetector.rgb2name", false]], "run_server() (display.analysisexplorer method)": [[0, "display.AnalysisExplorer.run_server", false]], "save_tensors() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.save_tensors", false]], "set_keys() (colors.colordetector method)": [[0, "colors.ColorDetector.set_keys", false]], "set_keys() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.set_keys", false]], "set_keys() (text.textdetector method)": [[0, "text.TextDetector.set_keys", false]], "set_keys() (utils.analysismethod method)": [[0, "utils.AnalysisMethod.set_keys", false]], "show_results() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.show_results", false]], "summary": [[0, "module-summary", false]], "summarydetector (class in summary)": [[0, "summary.SummaryDetector", false]], "text": [[0, "module-text", false]], "text_ner() (text.textdetector method)": [[0, "text.TextDetector.text_ner", false]], "text_sentiment_transformers() (text.textdetector method)": [[0, "text.TextDetector.text_sentiment_transformers", false]], "text_summary() (text.textdetector method)": [[0, "text.TextDetector.text_summary", false]], "textanalyzer (class in text)": [[0, "text.TextAnalyzer", false]], "textdetector (class in text)": [[0, "text.TextDetector", false]], "translate_text() (text.textdetector method)": [[0, "text.TextDetector.translate_text", false]], "update_picture() (display.analysisexplorer method)": [[0, "display.AnalysisExplorer.update_picture", false]], "upload_model_blip2_coco() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.upload_model_blip2_coco", false]], "upload_model_blip_base() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.upload_model_blip_base", false]], "upload_model_blip_large() (multimodal_search.multimodalsearch method)": [[0, "multimodal_search.MultimodalSearch.upload_model_blip_large", false]], "utils": [[0, "module-utils", false]], "wears_mask() (faces.emotiondetector method)": [[0, "faces.EmotionDetector.wears_mask", false]]}, 
"objects": {"": [[0, 0, 0, "-", "colors"], [0, 0, 0, "-", "cropposts"], [0, 0, 0, "-", "display"], [0, 0, 0, "-", "faces"], [0, 0, 0, "-", "multimodal_search"], [0, 0, 0, "-", "summary"], [0, 0, 0, "-", "text"], [0, 0, 0, "-", "utils"]], "colors": [[0, 1, 1, "", "ColorDetector"]], "colors.ColorDetector": [[0, 2, 1, "", "analyse_image"], [0, 2, 1, "", "rgb2name"], [0, 2, 1, "", "set_keys"]], "cropposts": [[0, 3, 1, "", "compute_crop_corner"], [0, 3, 1, "", "crop_image_from_post"], [0, 3, 1, "", "crop_media_posts"], [0, 3, 1, "", "crop_posts_from_refs"], [0, 3, 1, "", "crop_posts_image"], [0, 3, 1, "", "draw_matches"], [0, 3, 1, "", "kp_from_matches"], [0, 3, 1, "", "matching_points"], [0, 3, 1, "", "paste_image_and_comment"]], "display": [[0, 1, 1, "", "AnalysisExplorer"]], "display.AnalysisExplorer": [[0, 2, 1, "", "run_server"], [0, 2, 1, "", "update_picture"]], "faces": [[0, 1, 1, "", "EmotionDetector"], [0, 3, 1, "", "deepface_symlink_processor"], [0, 3, 1, "", "ethical_disclosure"]], "faces.EmotionDetector": [[0, 2, 1, "", "analyse_image"], [0, 2, 1, "", "analyze_single_face"], [0, 2, 1, "", "clean_subdict"], [0, 2, 1, "", "facial_expression_analysis"], [0, 2, 1, "", "set_keys"], [0, 2, 1, "", "wears_mask"]], "multimodal_search": [[0, 1, 1, "", "MultimodalSearch"]], "multimodal_search.MultimodalSearch": [[0, 2, 1, "", "compute_gradcam_batch"], [0, 2, 1, "", "extract_image_features_basic"], [0, 2, 1, "", "extract_image_features_blip2"], [0, 2, 1, "", "extract_image_features_clip"], [0, 2, 1, "", "extract_text_features"], [0, 2, 1, "", "get_att_map"], [0, 2, 1, "", "get_pathes_from_query"], [0, 2, 1, "", "image_text_match_reordering"], [0, 2, 1, "", "itm_text_precessing"], [0, 2, 1, "", "load_feature_extractor_model_albef"], [0, 2, 1, "", "load_feature_extractor_model_blip"], [0, 2, 1, "", "load_feature_extractor_model_blip2"], [0, 2, 1, "", "load_feature_extractor_model_clip_base"], [0, 2, 1, "", "load_feature_extractor_model_clip_vitl14"], [0, 2, 1, "", "load_feature_extractor_model_clip_vitl14_336"], [0, 2, 1, "", "load_tensors"], [0, 4, 1, "", "multimodal_device"], [0, 2, 1, "", "multimodal_search"], [0, 2, 1, "", "parsing_images"], [0, 2, 1, "", "querys_processing"], [0, 2, 1, "", "read_and_process_images"], [0, 2, 1, "", "read_and_process_images_itm"], [0, 2, 1, "", "read_img"], [0, 2, 1, "", "resize_img"], [0, 2, 1, "", "save_tensors"], [0, 2, 1, "", "show_results"], [0, 2, 1, "", "upload_model_blip2_coco"], [0, 2, 1, "", "upload_model_blip_base"], [0, 2, 1, "", "upload_model_blip_large"]], "summary": [[0, 1, 1, "", "SummaryDetector"]], "summary.SummaryDetector": [[0, 4, 1, "", "all_allowed_model_types"], [0, 4, 1, "", "allowed_analysis_types"], [0, 4, 1, "", "allowed_model_types"], [0, 4, 1, "", "allowed_new_model_types"], [0, 2, 1, "", "analyse_image"], [0, 2, 1, "", "analyse_questions"], [0, 2, 1, "", "analyse_summary"], [0, 2, 1, "", "check_model"], [0, 2, 1, "", "load_model"], [0, 2, 1, "", "load_model_base"], [0, 2, 1, "", "load_model_base_blip2_opt_caption_coco_opt67b"], [0, 2, 1, "", "load_model_base_blip2_opt_pretrain_opt67b"], [0, 2, 1, "", "load_model_blip2_opt_caption_coco_opt27b"], [0, 2, 1, "", "load_model_blip2_opt_pretrain_opt27b"], [0, 2, 1, "", "load_model_blip2_t5_caption_coco_flant5xl"], [0, 2, 1, "", "load_model_blip2_t5_pretrain_flant5xl"], [0, 2, 1, "", "load_model_blip2_t5_pretrain_flant5xxl"], [0, 2, 1, "", "load_model_large"], [0, 2, 1, "", "load_new_model"], [0, 2, 1, "", "load_vqa_model"]], "text": [[0, 1, 1, "", "PostprocessText"], [0, 1, 1, "", 
"TextAnalyzer"], [0, 1, 1, "", "TextDetector"]], "text.PostprocessText": [[0, 2, 1, "", "analyse_topic"], [0, 2, 1, "", "get_text_df"], [0, 2, 1, "", "get_text_dict"]], "text.TextAnalyzer": [[0, 2, 1, "", "read_csv"]], "text.TextDetector": [[0, 2, 1, "", "analyse_image"], [0, 2, 1, "", "clean_text"], [0, 2, 1, "", "get_text_from_image"], [0, 2, 1, "", "remove_linebreaks"], [0, 2, 1, "", "set_keys"], [0, 2, 1, "", "text_ner"], [0, 2, 1, "", "text_sentiment_transformers"], [0, 2, 1, "", "text_summary"], [0, 2, 1, "", "translate_text"]], "utils": [[0, 1, 1, "", "AnalysisMethod"], [0, 1, 1, "", "DownloadResource"], [0, 3, 1, "", "ammico_prefetch_models"], [0, 3, 1, "", "append_data_to_dict"], [0, 3, 1, "", "check_for_missing_keys"], [0, 3, 1, "", "dump_df"], [0, 3, 1, "", "find_files"], [0, 3, 1, "", "get_color_table"], [0, 3, 1, "", "get_dataframe"], [0, 3, 1, "", "initialize_dict"], [0, 3, 1, "", "is_interactive"], [0, 3, 1, "", "iterable"]], "utils.AnalysisMethod": [[0, 2, 1, "", "analyse_image"], [0, 2, 1, "", "set_keys"]], "utils.DownloadResource": [[0, 2, 1, "", "get"], [0, 4, 1, "", "resources"]]}, "objnames": {"0": ["py", "module", "Python module"], "1": ["py", "class", "Python class"], "2": ["py", "method", "Python method"], "3": ["py", "function", "Python function"], "4": ["py", "attribute", "Python attribute"]}, "objtypes": {"0": "py:module", "1": "py:class", "2": "py:method", "3": "py:function", "4": "py:attribute"}, "terms": {"": 5, "0": [0, 2, 7], "00": 6, "03": 5, "1": [0, 1, 2, 8], "10": [5, 6, 7], "100": [0, 5, 6], "1000": [1, 8], "11": 7, "12": [5, 7], "14": 0, "15": [5, 6], "16": 5, "163": 7, "19": 0, "1976": 0, "2": [0, 2], "20": [0, 5], "2022": [3, 7], "2023": 5, "20gb": 5, "23": 7, "240": 0, "240p": 0, "27": 5, "28": 5, "3": [0, 2], "30": [0, 5], "336": 0, "35": 5, "3_non": 5, "4": [2, 7], "5": [0, 5], "50": [0, 1, 5, 8], "5_clip_base_saved_features_imag": 5, "6": [0, 5, 7], "60gb": 5, "61": [5, 6], "7": 7, "7b": [0, 5], "8": [0, 7], "8050": 0, "8055": 5, "8057": 5, "9": 0, "981aa55a3b13": 5, "A": [0, 3, 5], "AND": 3, "AS": 3, "And": 5, "BE": 3, "BUT": 3, "Be": 7, "But": 5, "By": 5, "FOR": 3, "For": [5, 7], "IN": 3, "If": [0, 5, 6, 7], "In": [1, 5, 7, 8], "It": [1, 5, 6, 7, 8], "NO": 3, "NOT": 3, "No": 5, "OF": 3, "OR": 3, "One": 0, "Or": [1, 5, 8], "THE": 3, "TO": 3, "That": 5, "The": [0, 1, 2, 3, 6, 7, 8], "Then": 5, "There": 7, "These": [0, 5, 7], "To": [0, 5, 7], "WITH": 3, "With": 5, "_": 5, "__file__": 7, "_saved_features_imag": 5, "a100": 5, "a4f8f3": 5, "ab": 5, "about": [0, 5, 7], "abov": [3, 5, 7], "abus": 7, "accept": [0, 5], "accept_disclosur": [0, 5], "access": 2, "accord": 7, "account": [1, 7, 8], "accur": [5, 7], "action": 3, "activ": 7, "ad": 5, "adapt": 5, "add": [5, 7], "addendum": 7, "addit": 0, "advanc": 5, "af0f99b": 5, "after": [1, 5, 8], "ag": [5, 7], "again": 5, "age_threshold": [0, 5], "ai": 2, "albef": 5, "albef_feature_extractor": 0, "algorithm": [0, 5], "all": [0, 2, 3], "all_allowed_model_typ": [0, 4], "allow": [0, 5, 7], "allowed_analysis_typ": [0, 4], "allowed_model_typ": [0, 4], "allowed_new_model_typ": [0, 4], "along": 5, "alreadi": 5, "also": [0, 5, 7], "altern": 5, "american": 5, "ammico": [0, 1, 6, 8], "ammico_data_hom": 5, "ammico_env": 7, "ammico_prefetch_model": [0, 4], "among": 5, "an": [0, 3, 5, 6, 7], "analis": 0, "analys": [0, 2, 7], "analyse_imag": [0, 4, 5], "analyse_quest": [0, 4], "analyse_summari": [0, 4], "analyse_text": [0, 5, 7], "analyse_top": [0, 4], "analysi": [0, 2], "analysis_explor": 5, "analysis_typ": [0, 5], 
"analysisexplor": [0, 4, 5], "analysismethod": [0, 4], "analyz": [0, 2, 7], "analyze_single_fac": [0, 4], "analyze_text": 0, "anger": 5, "angri": 5, "ani": [0, 1, 3, 5, 7, 8], "anoth": 7, "answer": [0, 5, 7], "api": [0, 2, 5, 7], "app": 5, "append": 0, "append_data_to_dict": [0, 4, 5], "appli": 5, "applic": 6, "approach": 5, "appropri": 0, "approxim": 5, "ar": [0, 2, 5, 6], "architectur": 0, "archiv": 6, "area": 0, "arg": 0, "argument": 5, "aris": 3, "around": [0, 7], "arrai": 0, "art": 7, "as_posix": 6, "asian": 5, "ask": [0, 5], "assign": 5, "associ": [3, 5], "asyncbatchannotatefil": 7, "asyncbatchannotateimag": 7, "asynchron": 7, "att_map": 0, "attent": 0, "author": 3, "automat": [5, 7], "avail": [5, 7], "avif": [0, 5], "avoid": 7, "awar": 5, "b": 5, "back": [0, 1, 8], "background": 5, "bar": 5, "base": [0, 5, 7], "base_coco": 0, "bashr": 5, "batch": [0, 6, 7], "batch_siz": [0, 5], "batchannotatefil": 7, "batchannotateimag": 7, "becaus": [0, 5], "bee": 0, "been": [1, 5, 8], "befor": [0, 5, 7], "being": 5, "below": [0, 5, 6, 7], "bert": 5, "bertop": 0, "best": [5, 7], "best_simularity_value_in_current_search": 5, "better": 5, "between": [0, 5, 7], "beyond": 0, "bigger": 5, "bill": [1, 8], "binari": 0, "bit": 5, "black": [0, 7], "blank": [1, 8], "blip": 5, "blip2": 0, "blip2_coco": [0, 5], "blip2_feature_extractor": 0, "blip2_image_text_match": 0, "blip2_opt_caption_coco_opt2": [0, 5], "blip2_opt_caption_coco_opt6": [0, 5], "blip2_opt_pretrain_opt2": [0, 5], "blip2_opt_pretrain_opt6": [0, 5], "blip2_t5_caption_coco_flant5xl": [0, 5], "blip2_t5_pretrain_flant5xl": [0, 5], "blip2_t5_pretrain_flant5xxl": [0, 5], "blip_bas": 5, "blip_capt": 0, "blip_feature_extractor": 0, "blip_image_text_match": 0, "blip_larg": 5, "blip_vqa": 0, "block": 0, "block_num": 0, "blue": [0, 7], "blur": 0, "bool": [0, 5], "both": [5, 7], "briefli": 7, "bring": [1, 8], "brown": [0, 7], "browser": [1, 8], "build": 7, "c": [0, 3, 7], "calcul": 5, "call": [5, 7], "callback": 0, "campaign": 5, "can": [0, 1, 2, 5, 6, 8], "capabl": 5, "caption": [0, 5, 7], "caption_coco_": 5, "caption_coco_flant5xl": 0, "caption_coco_opt2": 0, "caption_coco_opt6": 0, "card": 5, "care": 7, "carri": [5, 6, 7], "case": 5, "categor": [0, 6], "categori": 5, "cell": [5, 6], "chang": 5, "channel": 7, "charg": [1, 3, 8], "check": [0, 5, 6, 7], "check_for_missing_kei": [0, 4], "check_model": [0, 4], "checkpoint": 5, "choos": [5, 7], "chosen": 0, "cie": 0, "citi": 5, "claim": 3, "class": [0, 5], "classif": [0, 5], "classifi": 7, "clean": [0, 5, 7], "clean_subdict": [0, 4], "clean_text": [0, 4], "cli": [0, 5], "click": [1, 5, 8], "clip_bas": 5, "clip_feature_extractor": 0, "clip_vitl14": 5, "clip_vitl14_336": 5, "closest": 0, "cloud": [0, 2], "cnn": 5, "coco": [0, 5], "code": [5, 6], "colab": [5, 6, 7], "colaboratori": [1, 8], "collect": 7, "color": [0, 2], "color_analysi": [2, 4], "color_bgr2rgb": 6, "colordetector": [0, 4, 5], "colorgram": [0, 5, 7], "colour": [5, 7], "column": [0, 5, 7], "column_kei": [0, 5], "com": [5, 6, 7], "combat": 7, "combin": 5, "come": 5, "command": 7, "comment": [0, 6, 7], "common": 0, "compat": 2, "complet": 5, "compli": 7, "compon": 7, "comput": [0, 1, 5, 8], "computation": 5, "compute_crop_corn": [0, 4], "compute_gradcam_batch": [0, 4], "conceal": 5, "conda": [5, 7], "conda_prefix": 7, "condit": 3, "confer": 5, "confid": 5, "conll03": 5, "connect": [3, 5, 7], "consequenti": 0, "consequential_quest": [0, 5], "consid": [0, 5], "consist": 5, "consol": [1, 8], "const_image_summari": 5, "constant": [0, 5], "contain": [0, 
2, 7], "content": [5, 6], "context": 5, "contract": 3, "contrib": 6, "conveni": 5, "convert": [0, 2], "coordin": 0, "copi": 3, "copyright": 3, "corner": [0, 1, 8], "correct": [5, 7], "correspond": 5, "cosequential_quest": 5, "could": 5, "count": 5, "countri": 5, "cover": 5, "cpp": 7, "cpu": [0, 5], "creat": [0, 1, 2, 7, 8], "creation": 5, "crop": [0, 2, 5], "crop_dir": 6, "crop_image_from_post": [0, 4], "crop_media_post": [0, 4, 6], "crop_post": 0, "crop_posts_from_ref": [0, 4, 6], "crop_posts_imag": [0, 4], "crop_view": [0, 6], "croppost": [2, 4, 6, 7], "crpo": 6, "css3": 0, "csv": [0, 2, 7], "csv_encod": 0, "csv_path": [0, 5], "cu11": 7, "cu118": 7, "cuda": [0, 5, 7], "cudatoolkit": 7, "cudnn": 7, "cudnn_path": 7, "current": [0, 1, 7, 8], "current_simularity_valu": 5, "cut": 0, "cv2": [0, 6], "cvtcolor": 6, "cyan": [0, 7], "d": [6, 7], "damag": 3, "dash": [0, 5], "dashboard": [1, 8], "data": [0, 2, 6, 7], "data_out": 5, "data_path": 5, "databas": 7, "datafram": [0, 2], "dataset": 2, "dbmdz": 5, "deactiv": 7, "deal": 3, "deepfac": [5, 7], "deepface_symlink_processor": [0, 4], "default": [0, 5], "defin": 5, "delet": 7, "delta": 0, "delta_e_method": 0, "demand": [0, 5], "demonstr": [2, 7], "depend": [5, 7], "dependend": 5, "depth": 7, "describ": 7, "descript": [5, 7], "descriptor": 0, "desir": 5, "detail": 5, "detect": [0, 2], "detector": [2, 7], "determin": [0, 5], "determinist": 0, "deterministic_summari": 5, "develop": [5, 7], "devic": 0, "device_typ": [0, 5], "df": 5, "dict": [0, 5], "dictionari": [0, 5], "differ": [5, 6], "directli": [1, 5, 8], "directori": [0, 5], "dirnam": 7, "discard": 5, "disclosur": 0, "disclosure_ammico": [0, 5], "disgust": 5, "disk": 7, "displai": [2, 4, 5], "distanc": 7, "distilbart": 5, "distilbert": 5, "distribut": 3, "dmatch": 0, "do": [3, 5, 7], "document": 3, "doe": [0, 5, 7], "doesn": 6, "dog": 5, "domin": 5, "don": 2, "done": [1, 5, 6, 7, 8], "dopamin": 5, "dot": [1, 8], "down": [1, 8], "download": [0, 1, 5, 7, 8], "downloadresourc": [0, 4], "draw_match": [0, 4], "drive": [1, 5, 6, 7, 8], "drop": [1, 8], "dropdown": 5, "due": [5, 6], "dump": [0, 5], "dump_df": [0, 4, 5], "dump_everi": 5, "dump_fil": 5, "e": [0, 7], "each": [0, 5], "easi": 5, "easier": 6, "easili": 5, "echo": 7, "either": [0, 5], "element": [5, 7], "emot": 5, "emotion_threshold": [0, 5], "emotiondetector": [0, 4, 5], "emotitiondetector": 5, "empti": [0, 5], "enabl": [2, 5, 7], "english": [0, 5, 7], "ensur": 5, "enter": [1, 8], "entiti": [0, 5, 7], "entity_typ": 5, "entri": [0, 5], "enumer": 5, "env_var": 7, "environ": [0, 5], "envorin": 5, "equip": 5, "error": [0, 5, 7], "estim": 0, "etc": 7, "ethic": 0, "ethical_disclosur": [0, 4, 5], "ethnic": 5, "even": 5, "event": 3, "everi": 5, "everyth": 6, "ex": 7, "exampl": [5, 6], "exclud": 0, "execut": [5, 6], "exist": 5, "exist_ok": 5, "experiment": [5, 7], "explain": 5, "explicitli": 5, "explor": [0, 5], "export": [5, 7], "express": [0, 2, 3, 7], "ext": 0, "extens": [5, 7], "extra": 6, "extrac": 7, "extract": 0, "extract_image_features_bas": [0, 4], "extract_image_features_blip2": [0, 4], "extract_image_features_clip": [0, 4], "extract_text_featur": [0, 4], "f": 6, "f2482bf": 5, "face": [2, 4, 7], "facial": [0, 2, 7], "facial_expression_analysi": [0, 4], "failsaf": 7, "fals": [0, 5, 6], "faq": 2, "fast": 5, "fear": 5, "featur": [0, 2], "feature_extractor": 0, "features_image_stack": [0, 5], "features_text": 0, "few": [5, 7], "field": 0, "figsiz": 6, "figur": 6, "file": [0, 1, 2, 3, 6, 7, 8], "filelist": 0, "filenam": [0, 5], "filepath": 0, 
"fill": 5, "filter": [0, 5], "filter_number_of_imag": [0, 5], "filter_rel_error": [0, 5], "filter_val_limit": [0, 5], "final_h": 0, "find": [0, 5, 6, 7], "find_fil": [0, 4, 5, 6], "fine": [5, 6], "finetun": 5, "first": [0, 1, 5, 6, 8], "fit": 3, "flag": 5, "flake8": [5, 6], "flan": 0, "flant5": 5, "flant5xl": 5, "flant5xxl": 5, "flex": 5, "float": [0, 5], "folder": [1, 6, 7, 8], "follow": [1, 3, 5, 7, 8], "forg": 7, "form": [0, 5], "format": [0, 5], "found": [0, 5, 6, 7], "fraction": 5, "frame": 0, "framework": 7, "frankfurt": 5, "free": [0, 1, 3, 8], "frequent": 0, "from": [0, 1, 3, 6, 7, 8], "full": 5, "function": [0, 5], "furnish": 3, "further": [2, 7], "g": 7, "gate": 5, "gb": [5, 7], "gbq": 5, "gender": [5, 7], "gender_threshold": [0, 5], "gener": [0, 2, 5, 7], "get": [0, 1, 4, 5, 7, 8], "get_att_map": [0, 4], "get_color_t": [0, 4], "get_datafram": [0, 4, 5], "get_ipython": [5, 6], "get_pathes_from_queri": [0, 4], "get_text_df": [0, 4], "get_text_dict": [0, 4], "get_text_from_imag": [0, 4], "gif": [0, 5], "git": [5, 6], "github": [5, 6], "give": 5, "given": [0, 5, 6], "global": 0, "go": [1, 8], "googl": [0, 2, 6], "google_application_credenti": [5, 7], "googletran": [5, 7], "gpu": [5, 7], "gradcam": 0, "grant": 3, "graphic": 2, "green": [0, 7], "grei": [0, 7], "h_margin": 0, "ha": [1, 5, 8], "happen": 2, "happi": 5, "have": [2, 5], "head": [5, 6], "heat": 5, "heavi": 5, "held": 7, "here": [1, 5, 7, 8], "herebi": 3, "high": 5, "hold": 5, "holder": 3, "horizont": 0, "hostedtoolcach": 0, "hour": 7, "how": [2, 5], "howev": 5, "hpc": 5, "http": [5, 6], "hug": 7, "huggingfac": 5, "human": 5, "i": [0, 1, 2, 3, 5, 6, 8], "id": [0, 1, 5, 8], "ideal": 5, "identifi": 5, "ignor": [0, 6], "imag": [0, 1, 2, 6, 8], "image_df": 5, "image_dict": 5, "image_example_path": 5, "image_example_queri": 5, "image_gradcam_with_itm": [0, 5], "image_kei": [0, 5], "image_nam": [0, 5], "image_path": 0, "image_summary_detector": 5, "image_summary_vqa_detector": 5, "image_text_match_reord": [0, 4, 5], "images_tensor": 0, "img": [0, 5], "img1": 0, "img2": 0, "img_path": 0, "immedi": 7, "imperfect": 6, "implement": 5, "impli": 3, "import": [2, 6, 7], "importlib_resourc": [5, 6], "improp": 6, "improv": 7, "imread": 6, "imshow": 6, "includ": [0, 3, 5], "incompat": 5, "increment": 5, "index": [0, 2, 7], "indic": 0, "inform": [1, 6, 7, 8], "inherit": 0, "iniati": 5, "init": 5, "initi": [0, 5, 7], "initialize_dict": [0, 4], "input": [0, 2, 7], "insid": 5, "inspect": 2, "instal": [2, 5, 6], "instanc": 5, "instead": 5, "instruct": [2, 5, 7], "int": [0, 5], "intellig": 7, "intens": 5, "interact": [0, 5], "interfac": 2, "internet": 2, "ipynb": 7, "is_interact": [0, 4], "isdir": 6, "item": [5, 6], "iter": [0, 4], "itm": [0, 5], "itm_model": [0, 5], "itm_model_typ": 0, "itm_scor": 5, "itm_scores2": 0, "itm_text_precess": [0, 4], "its": [5, 7], "iulusoi": 5, "jpeg": [0, 5], "jpg": [0, 5], "json": [1, 5, 7, 8], "jupyt": [1, 5, 8], "just": [0, 5, 7], "k": 5, "keep": 6, "kei": [0, 2, 7], "keypoint": 0, "keyword": [5, 7], "kind": 3, "kp1": 0, "kp2": 0, "kp_from_match": [0, 4], "kwarg": 0, "l": [0, 5], "languag": [0, 5, 7], "larg": [0, 5, 7], "large_coco": 0, "largest": 5, "later": 0, "latest": 6, "latter": [5, 7], "launch": 5, "lavi": [0, 5, 7], "ld_library_path": 7, "left": [1, 5, 8], "lemma": 7, "len": 5, "length": 0, "less": 5, "liabil": 3, "liabl": 3, "lib": [0, 7], "librari": [0, 5, 6, 7], "licens": 2, "lida": 5, "like": [1, 5, 8], "likelihood": 5, "limit": [0, 3, 5, 6], "line": [5, 6], "linebreak": [0, 5], "list": [0, 1, 5, 8], 
"list_of_quest": [0, 5], "live": 7, "llm": 5, "load": [0, 5, 6], "load_dataset": 5, "load_feature_extractor_model_albef": [0, 4], "load_feature_extractor_model_blip": [0, 4], "load_feature_extractor_model_blip2": [0, 4], "load_feature_extractor_model_clip_bas": [0, 4], "load_feature_extractor_model_clip_vitl14": [0, 4], "load_feature_extractor_model_clip_vitl14_336": [0, 4], "load_model": [0, 4], "load_model_bas": [0, 4], "load_model_base_blip2_opt_caption_coco_opt67b": [0, 4], "load_model_base_blip2_opt_pretrain_opt67b": [0, 4], "load_model_blip2_opt_caption_coco_opt27b": [0, 4], "load_model_blip2_opt_pretrain_opt27b": [0, 4], "load_model_blip2_t5_caption_coco_flant5xl": [0, 4], "load_model_blip2_t5_pretrain_flant5xl": [0, 4], "load_model_blip2_t5_pretrain_flant5xxl": [0, 4], "load_model_larg": [0, 4], "load_new_model": [0, 4], "load_tensor": [0, 4], "load_vqa_model": [0, 4], "local": [5, 7], "locat": [5, 7], "log": 7, "login": 5, "look": [0, 1, 5, 6, 8], "loop": 5, "lower": 5, "m": 7, "machin": [5, 7], "made": [5, 7], "mai": [5, 7], "main": [5, 7], "make": [1, 5, 7, 8], "man": 5, "manag": [1, 8], "mani": 5, "manual": 6, "map": [0, 5], "margin": 0, "mask": [0, 5, 7], "match": [0, 5, 6, 7], "matching_point": [0, 4], "matplotlib": 6, "maximum": [0, 5], "mean": 5, "media": [0, 2, 5, 6], "medicin": 5, "memori": [5, 7], "menu": [1, 5, 8], "merchant": 3, "merg": [0, 3], "merge_color": 0, "messag": 5, "metadata": 7, "method": [0, 5], "metric": [0, 7], "microsoft": 7, "might": 0, "min_match": 0, "minimum": [0, 5], "misinform": [2, 5], "miss": 0, "mit": 3, "mkdir": [5, 7], "model": [0, 7], "model_nam": [0, 5], "model_old": 0, "model_typ": [0, 5], "modifi": 3, "modul": [2, 7], "moment": 5, "month": [1, 8], "moral": 6, "more": [1, 5, 7, 8], "most": [0, 5], "mount": [5, 6], "msvc": 7, "much": 5, "multi_features_stack": 0, "multimod": [2, 4, 7], "multimodal_devic": [0, 4], "multimodal_search": [0, 4, 5, 7], "multimodalsearch": [0, 4, 5], "multipl": 5, "multiple_fac": 5, "must": 7, "mv": 6, "my_obj": 5, "mydict": [0, 5], "mydriv": 5, "n": [0, 5, 7], "n_color": 0, "name": [0, 1, 5, 6, 7, 8], "natur": 0, "ndarrai": 0, "necessari": [5, 7], "need": [0, 5, 7], "need_grad_cam": [0, 5], "neg": 5, "ner": 5, "nest": [0, 5], "neutral": 5, "new": [0, 1, 5, 7, 8], "next": 5, "nn": 0, "no_fac": 5, "non": 0, "nondeterministic_summari": 0, "none": [0, 5], "noninfring": 3, "noqa": [5, 6], "note": 5, "notebook": [1, 2, 6, 7, 8], "notic": 3, "now": [1, 5, 6, 8], "np": 0, "num": 5, "number": [0, 5, 7], "number_of_imag": 5, "numer": 5, "numpi": [0, 7], "nvidia": 7, "o": [5, 6, 7], "obj": 5, "object": [0, 5], "obtain": 3, "occur": 0, "off": 0, "offlin": 7, "old": 0, "onc": 5, "one": [0, 5, 7], "ones": 5, "onli": [0, 5, 6, 7], "onlin": 7, "open": 5, "opencv": 6, "oper": 7, "opt": [0, 5], "optim": 5, "option": [0, 5, 7], "orang": [0, 7], "orbax": 5, "order": [5, 7], "origin": [0, 5, 6], "other": [0, 3, 5, 7], "otherwis": [0, 3, 5], "our": [5, 7], "out": [1, 3, 5, 6, 7, 8], "outdict": 5, "output": [0, 2], "outsid": 5, "overal": 5, "overlap": 0, "own": 5, "p": [0, 7], "packag": [0, 2, 6], "page": [1, 2, 7, 8], "paid": 5, "panda": 2, "paper": 7, "paramet": [0, 5], "parent": 5, "pars": 0, "parsing_imag": [0, 4, 5], "part": [0, 6, 7], "parti": 7, "partial": 5, "particular": 3, "pass": 5, "past": 0, "paste_image_and_com": [0, 4], "path": [0, 5, 6], "path_post": 6, "path_ref": 6, "path_to_load_tensor": [0, 5], "path_to_save_tensor": [0, 5], "pathlib": 5, "patient": 5, "pattern": [0, 5], "peopl": 5, "per": [1, 8], "percentag": [0, 
5, 7], "perform": [0, 5, 7], "period": 7, "perman": 5, "permiss": 3, "permit": 3, "persist": 7, "person": [3, 5, 7], "pick": [1, 8], "pictur": [0, 2], "pil": 0, "pin": 6, "pink": [0, 7], "pip": [5, 6], "pipelin": [0, 5, 7], "pkg": [5, 6], "place": [1, 5, 7, 8], "pleas": [5, 6, 7], "plot": 6, "plt": 6, "plt_crop": [0, 6], "plt_imag": [0, 6], "plt_match": [0, 6], "png": [0, 5, 6], "pngimageplugin": 0, "point": 0, "polar": 7, "politician": 5, "pooch": 0, "pop": [1, 5, 8], "port": [0, 5], "portion": 3, "posit": [0, 5], "possibl": [5, 7], "post": [0, 2, 5], "postprocesstext": [0, 4], "pre": [5, 7], "predict": [5, 7], "prefetch": 0, "prefix": 0, "prepar": 5, "preprocessor": 0, "presenc": [5, 7], "present": [0, 5], "preserv": 5, "press": 5, "pretrain": [0, 5], "pretrain_": 5, "pretrain_opt2": 0, "pretrain_opt6": 0, "prevent": [6, 7], "previou": 5, "primari": 5, "print": [6, 7], "prioriti": 7, "privat": [1, 5, 8], "probabl": 5, "problem": [0, 2], "process": [0, 1, 5, 7, 8], "product": 5, "profil": 5, "progress": 5, "project": [1, 7, 8], "prompt": [1, 5, 8], "proper": 7, "proport": 0, "provid": [0, 3, 5, 6, 7], "pt": [0, 5], "public": 7, "publish": 3, "purpl": [0, 7], "purpos": [3, 5], "put": [0, 7], "py": [0, 7], "pycocotool": 7, "pyplot": 6, "python": [0, 1, 5, 6, 7, 8], "python3": 0, "q": 6, "qq": 6, "qqq": [5, 6], "queri": [0, 2, 7], "querys_process": [0, 4], "question": [0, 5, 7], "quit": 5, "race": [5, 7], "race_threshold": [0, 5], "ram": 5, "random": [0, 5], "random_se": [0, 5], "rank": 5, "raw_imag": 0, "raw_img": 0, "re": [5, 6, 7], "read": [0, 2], "read_and_process_imag": [0, 4], "read_and_process_images_itm": [0, 4], "read_csv": [0, 4, 5], "read_img": [0, 4], "receiv": 7, "recogn": 7, "recognit": [0, 5], "recurs": [0, 5], "red": [0, 7], "reduc": [0, 5], "ref": [5, 6], "ref_dir": 6, "ref_fil": [0, 6], "ref_view": [0, 6], "refer": [0, 1, 5, 6, 8], "region": [0, 6], "regist": 0, "reject": 5, "rel": 0, "relev": 5, "rememb": 5, "remot": 0, "remov": [0, 6, 7], "remove_linebreak": [0, 4], "reorder": 0, "report": 5, "repositori": 5, "represent": 5, "request": 7, "requir": [0, 5, 7], "rerun": 5, "resiz": 0, "resize_img": [0, 4], "resized_imag": 0, "resourc": [0, 4, 5], "respect": 5, "respond": 5, "respons": 7, "restart": 5, "restrict": [3, 7], "result": [0, 7], "retinafac": [5, 7], "return": [0, 5, 7], "return_top": 0, "revis": 5, "revision_numb": [0, 5], "rf": 6, "rgb": 0, "rgb2name": [0, 4], "rgb_ref_view": 6, "rgb_view": 6, "right": [1, 3, 5, 8], "rl": 5, "rm": 6, "row": 5, "run": [0, 1, 5, 6, 7, 8], "run_serv": [0, 4, 5], "runtim": 5, "sad": 5, "same": [5, 6, 7], "sampl": [6, 7], "save": [0, 6], "save_crop_dir": [0, 6], "save_tensor": [0, 4], "saved_features_imag": 0, "saved_tensor": 0, "score": 0, "screen": [1, 8], "screenshot": [1, 8], "script": 7, "sdk": 7, "search": [1, 2, 4, 7, 8], "search_queri": [0, 5], "second": [0, 5], "section": 5, "see": [1, 5, 7, 8], "seed": [0, 5], "seem": 6, "select": [0, 1, 8], "self": 0, "sell": 3, "send": 7, "sent": 2, "sentiment": [0, 5, 7], "sentiment_scor": 5, "separ": 5, "sequenti": 5, "server": [0, 5], "servic": [1, 7, 8], "session": 5, "set": [0, 1, 2, 6, 7, 8], "set_kei": [0, 4], "setuptool": [5, 6], "seven": 5, "sever": [5, 7], "sh": 7, "shade": 0, "shall": 3, "share": 7, "shell": 5, "short": 7, "shortcom": 5, "shot": 5, "should": [0, 1, 5, 8], "show": [0, 1, 5, 6, 8], "show_result": [0, 4, 5], "showcas": 5, "shown": 5, "shuffel": 0, "shuffl": [0, 5], "sidebar": 5, "sift": 0, "sign": [1, 8], "signifi": [0, 5], "similar": [0, 5, 7], "simultan": 5, 
"sinc": 5, "singl": [0, 5], "site": 0, "size": [0, 5, 7], "skip": 5, "skip_extract": [0, 5], "slightli": 5, "slower": 5, "small": 0, "smaller": [0, 5], "so": [0, 3, 5, 7], "social": [0, 5, 6, 7], "softwar": 3, "solv": 2, "some": [5, 6, 7], "someth": [1, 5, 6, 8], "sometim": [0, 5, 6, 7], "sort": 0, "sorted_list": [0, 5], "sourc": [5, 7], "space": 7, "spaci": [5, 7], "special": 5, "specif": [0, 5], "specifi": 5, "speech": 7, "spell": 7, "ssc": 3, "ssciwr": [5, 6], "sshleifer": 5, "sst": 5, "stack": 0, "start": 0, "state": 7, "step": [2, 6, 7], "still": 2, "store": [5, 7], "str": [0, 5, 6], "string": 5, "studio": 7, "style": 5, "subdict": [0, 5], "subdirectori": [0, 5], "subfold": 5, "subject": [3, 7], "sublicens": 3, "subsequ": 5, "substanti": 3, "substitut": 6, "subtract": 0, "suffix": 0, "suitabl": 5, "summar": 5, "summari": [2, 4, 7], "summary_and_quest": [0, 5], "summary_model": 0, "summary_vis_processor": 0, "summary_vqa_model": 0, "summary_vqa_model_new": 0, "summary_vqa_txt_processor": 0, "summary_vqa_txt_processors_new": 0, "summary_vqa_vis_processor": 0, "summary_vqa_vis_processors_new": 0, "summarydetector": [0, 4, 5], "sure": [1, 5, 8], "surpris": 5, "syntax": 5, "system": 0, "t": [2, 6], "t5": 0, "ta": 5, "tab": 5, "tabl": 5, "take": [0, 5], "taken": [0, 5], "task": [5, 7], "tell": 5, "temporarili": 7, "ten": 0, "tensor": [0, 5], "tensorflow": 5, "tesor": 0, "test": [2, 6], "text": [2, 4, 6], "text_clean": [5, 7], "text_df": 5, "text_dict": 5, "text_english": [0, 5, 7], "text_english_correct": 7, "text_input": [0, 5], "text_languag": [5, 7], "text_ner": [0, 4], "text_query_index": 0, "text_sentiment_transform": [0, 4], "text_summari": [0, 4, 5], "textanalyz": [0, 4, 5], "textblob": 7, "textdetector": [0, 4, 5], "textual": 7, "tf": 5, "than": [1, 5, 8], "thei": [0, 5, 7], "them": 5, "therefor": 5, "thi": [0, 1, 3, 5, 6, 7, 8], "third": 7, "three": [1, 5, 8], "threshold": 5, "through": [0, 5], "thrown": 5, "thu": 5, "tiff": [0, 5], "time": [5, 7], "to_csv": 5, "togeth": 0, "toggl": 5, "token": [0, 7], "tokenized_text": 0, "tool": 2, "top": [1, 5, 8], "top1": 5, "topic": [0, 5, 7], "torch": [0, 7], "torchaudio": [5, 7], "torchdata": 5, "torchtext": 5, "torchvis": 7, "tort": 3, "total": 5, "tpu": 5, "tqdm": 5, "train": [0, 5], "transform": [0, 5, 7], "translat": [0, 2], "translate_text": [0, 4], "true": [0, 5, 6, 7], "try": 5, "ttl": 7, "tune": 5, "tupl": 0, "two": [0, 5, 7], "txt_processor": [0, 5], "type": [0, 5, 6], "typic": 7, "ummary_and_quest": 5, "uncas": 5, "uncom": 5, "under": 7, "unecessari": 0, "uninstal": [5, 6], "union": 0, "unrecogn": [0, 7], "unrecogniz": 5, "unset": 5, "unzip": 6, "up": [0, 1, 5, 7, 8], "updat": [0, 5], "update_pictur": [0, 4], "upload": [1, 7, 8], "upload_model_blip2_coco": [0, 4], "upload_model_blip_bas": [0, 4], "upload_model_blip_larg": [0, 4], "upon": 5, "url": 7, "us": [0, 1, 2, 3, 8], "usa": 5, "usag": 2, "use_csv": 0, "user": [0, 2, 7], "utf": 0, "util": [2, 4, 6], "v": 7, "v143": 7, "v2": 5, "v_margin": 0, "valu": [0, 5], "variabl": [0, 5, 7], "veri": [5, 7], "version": [5, 6, 7], "vertic": [0, 1, 8], "via": 5, "video": 5, "view": [0, 6], "vis_processor": [0, 5], "vision": [0, 2], "visual": [0, 5, 7], "visual_input": 0, "visualstudio": 7, "vit": [0, 5], "vqa": [0, 5], "vram": 5, "vs_buildtool": 7, "wa": [0, 5, 7], "wai": [5, 6], "want": [5, 7], "warranti": 3, "we": [0, 5, 6], "wear": [0, 5, 7], "wears_mask": [0, 4, 5], "webp": [0, 5], "websit": 7, "well": 5, "wget": 6, "what": [2, 5], "when": [1, 5, 6, 7, 8], "where": [0, 1, 5, 7, 8], 
"whether": [0, 3], "which": [0, 5, 6, 7], "while": [0, 5], "white": [0, 7], "whitespac": 5, "whl": 7, "whole": 5, "whom": 3, "why": 5, "width": 0, "window": [1, 8], "wish": [1, 8], "without": [0, 3, 5, 7], "won": 7, "word": [0, 5, 7], "work": [0, 5, 6, 7], "world": 5, "worn": 5, "wrapper": 0, "write": [0, 2], "written": [0, 5], "wrong": 6, "x64": [0, 7], "x86": 7, "xl": 0, "xxl": 0, "y": [5, 6], "ye": 5, "yellow": [0, 7], "yet": 0, "you": [1, 5, 6, 7, 8], "your": [1, 2, 6, 7, 8], "zero": 5, "zip": 6}, "titles": ["text module", "Instructions how to generate and enable a google Cloud Vision API key", "Welcome to AMMICO\u2019s documentation!", "License", "AMMICO package modules", "AMMICO Demonstration Notebook", "Crop posts module", "AMMICO - AI Media and Misinformation Content Analysis Tool", "Instructions how to generate and enable a google Cloud Vision API key"], "titleterms": {"": 2, "0": 5, "1": [5, 7], "2": [5, 7], "3": [5, 7], "4": 5, "The": 5, "access": 7, "after": 7, "ai": 7, "all": 5, "ammico": [2, 4, 5, 7], "analys": 5, "analysi": [5, 7], "analyz": 5, "api": [1, 8], "ar": 7, "blip2": 5, "can": 7, "cloud": [1, 5, 7, 8], "color": [5, 7], "color_analysi": 0, "compat": 7, "contain": 5, "content": [2, 7], "convert": 5, "creat": 5, "crop": [6, 7], "croppost": 0, "csv": 5, "data": 5, "datafram": 5, "dataset": 5, "demonstr": 5, "detect": [5, 7], "detector": 5, "disclosur": 5, "displai": 0, "document": 2, "don": 7, "emot": 7, "enabl": [1, 8], "environ": 7, "ethic": 5, "express": 5, "extract": [5, 7], "face": [0, 5], "facial": 5, "faq": 7, "featur": [5, 7], "file": 5, "first": 7, "folder": 5, "formul": 5, "from": 5, "further": 5, "gener": [1, 8], "googl": [1, 5, 7, 8], "graphic": 5, "happen": 7, "have": 7, "how": [1, 8], "http": 7, "hue": 7, "i": 7, "imag": [5, 7], "import": 5, "improv": 5, "index": 5, "indic": 2, "input": 5, "inspect": 5, "instal": 7, "instruct": [1, 8], "interfac": 5, "internet": 7, "kei": [1, 5, 8], "licens": 3, "media": 7, "micromamba": 7, "misinform": 7, "model": 5, "modul": [0, 4, 5, 6], "multimod": [0, 5], "notebook": 5, "org": 7, "output": 5, "packag": [4, 5, 7], "panda": 5, "pictur": 5, "pip": 7, "post": [6, 7], "prepar": 7, "problem": 7, "pytorch": 7, "queri": 5, "read": 5, "recognit": 7, "result": 5, "right": 7, "save": 5, "search": [0, 5], "second": 7, "select": 5, "sent": 7, "set": 5, "solv": 7, "statement": 5, "step": 5, "still": 7, "summari": [0, 5], "t": 7, "tabl": 2, "tensorflow": 7, "test": 5, "text": [0, 5, 7], "tool": 7, "translat": [5, 7], "us": [5, 7], "usag": 7, "user": 5, "util": 0, "vision": [1, 5, 7, 8], "we": 7, "welcom": 2, "what": 7, "window": 7, "write": 5, "www": 7, "your": 5}})
\ No newline at end of file
diff --git a/source/notebooks/DemoNotebook_ammico.ipynb b/source/notebooks/DemoNotebook_ammico.ipynb
index 292a93d..f6a53da 100644
--- a/source/notebooks/DemoNotebook_ammico.ipynb
+++ b/source/notebooks/DemoNotebook_ammico.ipynb
@@ -166,7 +166,7 @@
"source": [
"image_dict = ammico.find_files(\n",
" # path=\"/content/drive/MyDrive/misinformation-data/\",\n",
- " path=data_path.as_posix(),\n",
+ " path=str(data_path),\n",
" limit=15,\n",
")"
]
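
For readers skimming this one-line change: `data_path.as_posix()` always yields forward slashes, while `str(data_path)` keeps the platform's native separators. A minimal sketch of the difference, using `PureWindowsPath` so the Windows behaviour is reproducible on any host OS (the example path is illustrative):

```python
from pathlib import PureWindowsPath

# PureWindowsPath applies Windows path semantics regardless of the host OS,
# so this sketch runs anywhere.
p = PureWindowsPath("C:/Users/me/misinformation-data")
print(str(p))        # C:\Users\me\misinformation-data  (native separators)
print(p.as_posix())  # C:/Users/me/misinformation-data  (always forward slashes)
```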
@@ -177,7 +177,30 @@
"source": [
"## Step 2: Inspect the input files using the graphical user interface\n",
"A Dash user interface is to select the most suitable options for the analysis, before running a complete analysis on the whole data set. The options for each detector module are explained below in the corresponding sections; for example, different models can be selected that will provide slightly different results. This way, the user can interactively explore which settings provide the most accurate results. In the interface, the nested `image_dict` is passed through the `AnalysisExplorer` class. The interface is run on a specific port which is passed using the `port` keyword; if a port is already in use, it will return an error message, in which case the user should select a different port number. \n",
- "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run."
+ "The interface opens a dash app inside the Jupyter Notebook and allows selection of the input file in the top left dropdown menu, as well as selection of the detector type in the top right, with options for each detector type as explained below. The output of the detector is shown directly on the right next to the image. This way, the user can directly inspect how updating the options for each detector changes the computed results, and find the best settings for a production run.\n",
+ "\n",
+ "### Ethical disclosure statement\n",
+ "\n",
+ "If you want to run an analysis using the EmotionDetector detector type, you have first have to respond to an ethical disclosure statement. This disclosure statement ensures that you only use the full capabilities of the EmotionDetector after you have been made aware of its shortcomings.\n",
+ "\n",
+ "For this, answer \"yes\" or \"no\" to the below prompt. This will set an environment variable with the name given as in `accept_disclosure`. To re-run the disclosure prompt, unset the variable by uncommenting the line `os.environ.pop(accept_disclosure, None)`. To permanently set this envorinment variable, add it to your shell via your `.profile` or `.bashr` file.\n",
+ "\n",
+ "If the disclosure statement is accepted, the EmotionDetector will perform age, gender and race/ethnicity classification dependend on the provided thresholds. If the disclosure is rejected, only the presence of faces and emotion (if not wearing a mask) is detected."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# respond to the disclosure statement\n",
+ "# this will set an environment variable for you\n",
+ "# if you do not want to re-accept the disclosure every time, you can set this environment variable in your shell\n",
+ "# to re-set the environment variable, uncomment the below line\n",
+ "accept_disclosure = \"DISCLOSURE_AMMICO\"\n",
+ "# os.environ.pop(accept_disclosure, None)\n",
+ "_ = ammico.ethical_disclosure(accept_disclosure=accept_disclosure)"
]
},
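
The notebook cell above relies on `ammico.ethical_disclosure` to prompt and cache the answer. Its internals are not shown in this diff; the following is only a plausible sketch of the environment-variable mechanism described above (the prompt wording and the stored "True"/"False" value are assumptions, not ammico's actual implementation):

```python
import os

def ethical_disclosure_sketch(accept_disclosure: str = "DISCLOSURE_AMMICO") -> bool:
    """Prompt once, then cache the answer in an environment variable."""
    if os.environ.get(accept_disclosure) is None:
        answer = input("Do you accept the ethical disclosure statement? (yes/no) ")
        # cache the normalized answer so the prompt is skipped on subsequent runs
        os.environ[accept_disclosure] = str(answer.strip().lower() == "yes")
    return os.environ[accept_disclosure] == "True"
```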
{
@@ -822,7 +845,7 @@
"metadata": {},
"source": [
"## Detection of faces and facial expression analysis\n",
- "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface.\n",
+ "Faces and facial expressions are detected and analyzed using the `EmotionDetector` class from the `faces` module. Initially, it is detected if faces are present on the image using RetinaFace, followed by analysis if face masks are worn (Face-Mask-Detection). The detection of age, gender, race, and emotions is carried out with deepface, but only if the disclosure statement has been accepted (see above).\n",
"\n",
" \n",
"\n",
@@ -832,10 +855,11 @@
"\n",
"From the seven facial expressions, an overall dominating emotion category is identified: negative, positive, or neutral emotion. These are defined with the facial expressions angry, disgust, fear and sad for the negative category, happy for the positive category, and surprise and neutral for the neutral category.\n",
"\n",
- "A similar threshold as for the emotion recognition is set for the race detection, `race_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
+ "A similar threshold as for the emotion recognition is set for the race/ethnicity, gender and age detection, `race_threshold`, `gender_threshold`, `age_threshold`, with the default set to 50% so that a confidence for the race above 0.5 only will return a value in the analysis. \n",
"\n",
- "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold` and \n",
- "`race_threshold` are optional:"
+ "You may also pass the name of the environment variable that determines if you accept or reject the ethical disclosure statement. By default, the variable is named `DISCLOSURE_AMMICO`.\n",
+ "\n",
+ "Summarizing, the face detection is carried out using the following method call and keywords, where `emotion_threshold`, `race_threshold`, `gender_threshold`, `age_threshold` are optional:"
]
},
{
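
The expression-to-category mapping described above is a fixed lookup; this small sketch restates the documented mapping (the names are illustrative, not ammico's internal API):

```python
# Mapping of the seven facial expressions to the three overall categories,
# exactly as documented in the text above.
EMOTION_CATEGORIES = {
    "angry": "negative", "disgust": "negative", "fear": "negative", "sad": "negative",
    "happy": "positive",
    "surprise": "neutral", "neutral": "neutral",
}

def emotion_category(expression: str) -> str:
    """Return the overall category for a detected facial expression."""
    return EMOTION_CATEGORIES[expression]

print(emotion_category("disgust"))  # negative
```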
@@ -845,7 +869,9 @@
"outputs": [],
"source": [
"for key in image_dict.keys():\n",
- " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50).analyse_image()"
+ " image_dict[key] = ammico.EmotionDetector(image_dict[key], emotion_threshold=50, race_threshold=50,\n",
+ " gender_threshold=50, age_threshold=50, \n",
+ " accept_disclosure=\"DISCLOSURE_AMMICO\").analyse_image()"
]
},
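
To make the threshold semantics concrete: a prediction is only reported when its confidence clears the corresponding threshold. A hedged sketch of that filtering rule follows (function name and signature are illustrative, not ammico internals):

```python
from typing import Optional

def apply_threshold(label: str, confidence: float, threshold: float = 50.0) -> Optional[str]:
    """Return the label only if its confidence (0-1) exceeds the threshold (in %)."""
    return label if confidence > threshold / 100.0 else None

print(apply_threshold("happy", 0.62))  # happy
print(apply_threshold("fear", 0.41))   # None
```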
{