{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Image summary and visual question answering"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebooks shows how to generate image captions and use the visual question answering with [LAVIS](https://github.com/salesforce/LAVIS). \n",
    "\n",
    "The first cell is only run on google colab and installs the [ammico](https://github.com/ssciwr/AMMICO) package.\n",
    "\n",
    "After that, we can import `ammico` and read in the files given a folder path."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:17:55.775871Z",
     "iopub.status.busy": "2023-08-16T08:17:55.774778Z",
     "iopub.status.idle": "2023-08-16T08:17:55.787030Z",
     "shell.execute_reply": "2023-08-16T08:17:55.786302Z"
    }
   },
   "outputs": [],
   "source": [
    "# if running on google colab\n",
    "# flake8-noqa-cell\n",
    "import os\n",
    "\n",
    "if \"google.colab\" in str(get_ipython()):\n",
    "    # update python version\n",
    "    # install setuptools\n",
    "    # %pip install setuptools==61 -qqq\n",
    "    # install ammico\n",
    "    %pip install git+https://github.com/ssciwr/ammico.git -qqq\n",
    "    # mount google drive for data and API key\n",
    "    from google.colab import drive\n",
    "\n",
    "    drive.mount(\"/content/drive\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:17:55.790790Z",
     "iopub.status.busy": "2023-08-16T08:17:55.790274Z",
     "iopub.status.idle": "2023-08-16T08:18:08.881195Z",
     "shell.execute_reply": "2023-08-16T08:18:08.880406Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "import ammico\n",
    "from ammico import utils as mutils\n",
    "from ammico import display as mdisplay\n",
    "import ammico.summary as sm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:18:08.885434Z",
     "iopub.status.busy": "2023-08-16T08:18:08.884517Z",
     "iopub.status.idle": "2023-08-16T08:18:08.890596Z",
     "shell.execute_reply": "2023-08-16T08:18:08.889866Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Here you need to provide the path to your google drive folder\n",
    "# or local folder containing the images\n",
    "images = mutils.find_files(\n",
    "    path=\"data/\",\n",
    "    limit=10,\n",
    ")"
   ]
  },
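  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, you can list the files that were picked up (this sketch assumes `find_files` returns a list of file paths, as the call above suggests):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# quick sanity check: which image files were found?\n",
    "# (assumes find_files returns a list of file paths)\n",
    "print(len(images))\n",
    "print(images)"
   ]
  },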
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:18:08.894139Z",
     "iopub.status.busy": "2023-08-16T08:18:08.893643Z",
     "iopub.status.idle": "2023-08-16T08:18:08.897326Z",
     "shell.execute_reply": "2023-08-16T08:18:08.896510Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "mydict = mutils.initialize_dict(images)"
   ]
  },
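  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dictionary holds one entry per image that the detectors will fill with results. You can take a quick look at its structure (assuming `initialize_dict` keys the entries by filename):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# inspect the initialized dictionary: one entry per image\n",
    "for key, value in mydict.items():\n",
    "    print(key, value)"
   ]
  },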
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create captions for images and directly write to csv"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here you can choose between two models: \"base\" or \"large\". This will generate the caption for each image and directly put the results in a dataframe. This dataframe can be exported as a csv file.\n",
    "\n",
    "The results are written into the columns `const_image_summary` - this will always be the same result (as always the same seed will be used). The column `3_non-deterministic summary` displays three different answers generated with different seeds, these are most likely different when you run the analysis again."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:18:08.901139Z",
     "iopub.status.busy": "2023-08-16T08:18:08.900636Z",
     "iopub.status.idle": "2023-08-16T08:18:56.282145Z",
     "shell.execute_reply": "2023-08-16T08:18:56.280793Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: 46e7044b-81c7-40dd-96d4-f00f47977854)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: 60a0dbec-fe0d-4c08-ab66-81694c369895)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/added_tokens.json\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: dc7187e1-ebbb-4567-af33-a6895b248a79)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/special_tokens_map.json\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: 59a281a5-bebf-4eda-9a26-a35771174657)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json\n"
     ]
    },
    {
     "ename": "OSError",
     "evalue": "Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer.",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mOSError\u001b[0m                                   Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[5], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m obj \u001b[38;5;241m=\u001b[39m \u001b[43msm\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mSummaryDetector\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmydict\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m      2\u001b[0m summary_model, summary_vis_processors \u001b[38;5;241m=\u001b[39m obj\u001b[38;5;241m.\u001b[39mload_model(model_type\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mbase\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m      3\u001b[0m \u001b[38;5;66;03m# summary_model, summary_vis_processors = mutils.load_model(\"large\")\u001b[39;00m\n",
      "File \u001b[0;32m~/work/AMMICO/AMMICO/ammico/summary.py:72\u001b[0m, in \u001b[0;36mSummaryDetector.__init__\u001b[0;34m(self, subdict, summary_model_type, analysis_type, list_of_questions, summary_model, summary_vis_processors, summary_vqa_model, summary_vqa_vis_processors, summary_vqa_txt_processors)\u001b[0m\n\u001b[1;32m     66\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlist_of_questions \u001b[38;5;241m=\u001b[39m list_of_questions\n\u001b[1;32m     67\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[1;32m     68\u001b[0m     (summary_model \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m     69\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (summary_vis_processors \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m     70\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (analysis_type \u001b[38;5;241m!=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mquestions\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m     71\u001b[0m ):\n\u001b[0;32m---> 72\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msummary_model, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msummary_vis_processors \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mload_model\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m     73\u001b[0m \u001b[43m        \u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43msummary_model_type\u001b[49m\n\u001b[1;32m     74\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     75\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m     76\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msummary_model \u001b[38;5;241m=\u001b[39m summary_model\n",
      "File \u001b[0;32m~/work/AMMICO/AMMICO/ammico/summary.py:145\u001b[0m, in \u001b[0;36mSummaryDetector.load_model\u001b[0;34m(self, model_type)\u001b[0m\n\u001b[1;32m    131\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m    132\u001b[0m \u001b[38;5;124;03mLoad blip_caption model and preprocessors for visual inputs from lavis.models.\u001b[39;00m\n\u001b[1;32m    133\u001b[0m \n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m    139\u001b[0m \u001b[38;5;124;03m    vis_processors (dict): preprocessors for visual inputs.\u001b[39;00m\n\u001b[1;32m    140\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m    141\u001b[0m select_model \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m    142\u001b[0m     \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mbase\u001b[39m\u001b[38;5;124m\"\u001b[39m: SummaryDetector\u001b[38;5;241m.\u001b[39mload_model_base,\n\u001b[1;32m    143\u001b[0m     \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mlarge\u001b[39m\u001b[38;5;124m\"\u001b[39m: SummaryDetector\u001b[38;5;241m.\u001b[39mload_model_large,\n\u001b[1;32m    144\u001b[0m }\n\u001b[0;32m--> 145\u001b[0m summary_model, summary_vis_processors \u001b[38;5;241m=\u001b[39m \u001b[43mselect_model\u001b[49m\u001b[43m[\u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[43m]\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    146\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m summary_model, summary_vis_processors\n",
      "File \u001b[0;32m~/work/AMMICO/AMMICO/ammico/summary.py:104\u001b[0m, in \u001b[0;36mSummaryDetector.load_model_base\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m     94\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mload_model_base\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m     95\u001b[0m \u001b[38;5;250m    \u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m     96\u001b[0m \u001b[38;5;124;03m    Load base_coco blip_caption model and preprocessors for visual inputs from lavis.models.\u001b[39;00m\n\u001b[1;32m     97\u001b[0m \n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m    102\u001b[0m \u001b[38;5;124;03m        vis_processors (dict): preprocessors for visual inputs.\u001b[39;00m\n\u001b[1;32m    103\u001b[0m \u001b[38;5;124;03m    \"\"\"\u001b[39;00m\n\u001b[0;32m--> 104\u001b[0m     summary_model, summary_vis_processors, _ \u001b[38;5;241m=\u001b[39m \u001b[43mload_model_and_preprocess\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    105\u001b[0m \u001b[43m        \u001b[49m\u001b[43mname\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mblip_caption\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m    106\u001b[0m \u001b[43m        \u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mbase_coco\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m    107\u001b[0m \u001b[43m        \u001b[49m\u001b[43mis_eval\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m    108\u001b[0m \u001b[43m        \u001b[49m\u001b[43mdevice\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msummary_device\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    109\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    110\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m summary_model, summary_vis_processors\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/__init__.py:195\u001b[0m, in \u001b[0;36mload_model_and_preprocess\u001b[0;34m(name, model_type, is_eval, device)\u001b[0m\n\u001b[1;32m    192\u001b[0m model_cls \u001b[38;5;241m=\u001b[39m registry\u001b[38;5;241m.\u001b[39mget_model_class(name)\n\u001b[1;32m    194\u001b[0m \u001b[38;5;66;03m# load model\u001b[39;00m\n\u001b[0;32m--> 195\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[43mmodel_cls\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_pretrained\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmodel_type\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    197\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m is_eval:\n\u001b[1;32m    198\u001b[0m     model\u001b[38;5;241m.\u001b[39meval()\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/base_model.py:70\u001b[0m, in \u001b[0;36mBaseModel.from_pretrained\u001b[0;34m(cls, model_type)\u001b[0m\n\u001b[1;32m     60\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m     61\u001b[0m \u001b[38;5;124;03mBuild a pretrained model from default configuration file, specified by model_type.\u001b[39;00m\n\u001b[1;32m     62\u001b[0m \n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m     67\u001b[0m \u001b[38;5;124;03m    - model (nn.Module): pretrained or finetuned model, depending on the configuration.\u001b[39;00m\n\u001b[1;32m     68\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m     69\u001b[0m model_cfg \u001b[38;5;241m=\u001b[39m OmegaConf\u001b[38;5;241m.\u001b[39mload(\u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39mdefault_config_path(model_type))\u001b[38;5;241m.\u001b[39mmodel\n\u001b[0;32m---> 70\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_config\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_cfg\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     72\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m model\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/blip_models/blip_caption.py:216\u001b[0m, in \u001b[0;36mBlipCaption.from_config\u001b[0;34m(cls, cfg)\u001b[0m\n\u001b[1;32m    213\u001b[0m prompt \u001b[38;5;241m=\u001b[39m cfg\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mprompt\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m    214\u001b[0m max_txt_len \u001b[38;5;241m=\u001b[39m cfg\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmax_txt_len\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;241m40\u001b[39m)\n\u001b[0;32m--> 216\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mimage_encoder\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtext_decoder\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprompt\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mprompt\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmax_txt_len\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmax_txt_len\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    217\u001b[0m model\u001b[38;5;241m.\u001b[39mload_checkpoint_from_config(cfg)\n\u001b[1;32m    219\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m model\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/blip_models/blip_caption.py:43\u001b[0m, in \u001b[0;36mBlipCaption.__init__\u001b[0;34m(self, image_encoder, text_decoder, prompt, max_txt_len)\u001b[0m\n\u001b[1;32m     40\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, image_encoder, text_decoder, prompt\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m, max_txt_len\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m40\u001b[39m):\n\u001b[1;32m     41\u001b[0m     \u001b[38;5;28msuper\u001b[39m()\u001b[38;5;241m.\u001b[39m\u001b[38;5;21m__init__\u001b[39m()\n\u001b[0;32m---> 43\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtokenizer \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minit_tokenizer\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     45\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mvisual_encoder \u001b[38;5;241m=\u001b[39m image_encoder\n\u001b[1;32m     46\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtext_decoder \u001b[38;5;241m=\u001b[39m text_decoder\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/blip_models/blip.py:22\u001b[0m, in \u001b[0;36mBlipBase.init_tokenizer\u001b[0;34m(cls)\u001b[0m\n\u001b[1;32m     20\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[1;32m     21\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21minit_tokenizer\u001b[39m(\u001b[38;5;28mcls\u001b[39m):\n\u001b[0;32m---> 22\u001b[0m     tokenizer \u001b[38;5;241m=\u001b[39m \u001b[43mBertTokenizer\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_pretrained\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mbert-base-uncased\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m     23\u001b[0m     tokenizer\u001b[38;5;241m.\u001b[39madd_special_tokens({\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mbos_token\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m[DEC]\u001b[39m\u001b[38;5;124m\"\u001b[39m})\n\u001b[1;32m     24\u001b[0m     tokenizer\u001b[38;5;241m.\u001b[39madd_special_tokens({\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124madditional_special_tokens\u001b[39m\u001b[38;5;124m\"\u001b[39m: [\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m[ENC]\u001b[39m\u001b[38;5;124m\"\u001b[39m]})\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1788\u001b[0m, in \u001b[0;36mPreTrainedTokenizerBase.from_pretrained\u001b[0;34m(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)\u001b[0m\n\u001b[1;32m   1782\u001b[0m     logger\u001b[38;5;241m.\u001b[39minfo(\n\u001b[1;32m   1783\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCan\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt load following files from cache: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00munresolved_files\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m and cannot check if these \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1784\u001b[0m         \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mfiles are necessary for the tokenizer to operate.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1785\u001b[0m     )\n\u001b[1;32m   1787\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mall\u001b[39m(full_file_name \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;28;01mfor\u001b[39;00m full_file_name \u001b[38;5;129;01min\u001b[39;00m resolved_vocab_files\u001b[38;5;241m.\u001b[39mvalues()):\n\u001b[0;32m-> 1788\u001b[0m     \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mEnvironmentError\u001b[39;00m(\n\u001b[1;32m   1789\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCan\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt load tokenizer for \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpretrained_model_name_or_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m. If you were trying to load it from \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1790\u001b[0m         \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mhttps://huggingface.co/models\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m, make sure you don\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt have a local directory with the same name. \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1791\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mOtherwise, make sure \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpretrained_model_name_or_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m is the correct path to a directory \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1792\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcontaining all relevant files for a \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m tokenizer.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1793\u001b[0m     )\n\u001b[1;32m   1795\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m file_id, file_path \u001b[38;5;129;01min\u001b[39;00m vocab_files\u001b[38;5;241m.\u001b[39mitems():\n\u001b[1;32m   1796\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m file_id \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;129;01min\u001b[39;00m resolved_vocab_files:\n",
      "\u001b[0;31mOSError\u001b[0m: Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer."
     ]
    }
   ],
   "source": [
    "obj = sm.SummaryDetector(mydict)\n",
    "summary_model, summary_vis_processors = obj.load_model(model_type=\"base\")\n",
    "# summary_model, summary_vis_processors = mutils.load_model(\"large\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:18:56.287693Z",
     "iopub.status.busy": "2023-08-16T08:18:56.287116Z",
     "iopub.status.idle": "2023-08-16T08:19:42.077722Z",
     "shell.execute_reply": "2023-08-16T08:19:42.076650Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: 24f208f6-43d7-47b6-8107-73577c6b2438)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: fd6ca06d-4f6b-47bb-8cf4-948c3ca85a08)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/added_tokens.json\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: 8c8fd2ef-98fd-4aa6-8695-7ce208b547de)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/special_tokens_map.json\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: 03ed1cf4-5e3e-4f84-a85d-019584a19d86)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json\n"
     ]
    },
    {
     "ename": "OSError",
     "evalue": "Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer.",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mOSError\u001b[0m                                   Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[6], line 2\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m key \u001b[38;5;129;01min\u001b[39;00m mydict:\n\u001b[0;32m----> 2\u001b[0m     mydict[key] \u001b[38;5;241m=\u001b[39m \u001b[43msm\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mSummaryDetector\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmydict\u001b[49m\u001b[43m[\u001b[49m\u001b[43mkey\u001b[49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241m.\u001b[39manalyse_image(\n\u001b[1;32m      3\u001b[0m         summary_model\u001b[38;5;241m=\u001b[39msummary_model, summary_vis_processors\u001b[38;5;241m=\u001b[39msummary_vis_processors\n\u001b[1;32m      4\u001b[0m     )\n",
      "File \u001b[0;32m~/work/AMMICO/AMMICO/ammico/summary.py:72\u001b[0m, in \u001b[0;36mSummaryDetector.__init__\u001b[0;34m(self, subdict, summary_model_type, analysis_type, list_of_questions, summary_model, summary_vis_processors, summary_vqa_model, summary_vqa_vis_processors, summary_vqa_txt_processors)\u001b[0m\n\u001b[1;32m     66\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlist_of_questions \u001b[38;5;241m=\u001b[39m list_of_questions\n\u001b[1;32m     67\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[1;32m     68\u001b[0m     (summary_model \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m     69\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (summary_vis_processors \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m     70\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (analysis_type \u001b[38;5;241m!=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mquestions\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m     71\u001b[0m ):\n\u001b[0;32m---> 72\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msummary_model, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msummary_vis_processors \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mload_model\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m     73\u001b[0m \u001b[43m        \u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43msummary_model_type\u001b[49m\n\u001b[1;32m     74\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     75\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m     76\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msummary_model \u001b[38;5;241m=\u001b[39m summary_model\n",
      "File \u001b[0;32m~/work/AMMICO/AMMICO/ammico/summary.py:145\u001b[0m, in \u001b[0;36mSummaryDetector.load_model\u001b[0;34m(self, model_type)\u001b[0m\n\u001b[1;32m    131\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m    132\u001b[0m \u001b[38;5;124;03mLoad blip_caption model and preprocessors for visual inputs from lavis.models.\u001b[39;00m\n\u001b[1;32m    133\u001b[0m \n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m    139\u001b[0m \u001b[38;5;124;03m    vis_processors (dict): preprocessors for visual inputs.\u001b[39;00m\n\u001b[1;32m    140\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m    141\u001b[0m select_model \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m    142\u001b[0m     \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mbase\u001b[39m\u001b[38;5;124m\"\u001b[39m: SummaryDetector\u001b[38;5;241m.\u001b[39mload_model_base,\n\u001b[1;32m    143\u001b[0m     \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mlarge\u001b[39m\u001b[38;5;124m\"\u001b[39m: SummaryDetector\u001b[38;5;241m.\u001b[39mload_model_large,\n\u001b[1;32m    144\u001b[0m }\n\u001b[0;32m--> 145\u001b[0m summary_model, summary_vis_processors \u001b[38;5;241m=\u001b[39m \u001b[43mselect_model\u001b[49m\u001b[43m[\u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[43m]\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    146\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m summary_model, summary_vis_processors\n",
      "File \u001b[0;32m~/work/AMMICO/AMMICO/ammico/summary.py:104\u001b[0m, in \u001b[0;36mSummaryDetector.load_model_base\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m     94\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mload_model_base\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m     95\u001b[0m \u001b[38;5;250m    \u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m     96\u001b[0m \u001b[38;5;124;03m    Load base_coco blip_caption model and preprocessors for visual inputs from lavis.models.\u001b[39;00m\n\u001b[1;32m     97\u001b[0m \n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m    102\u001b[0m \u001b[38;5;124;03m        vis_processors (dict): preprocessors for visual inputs.\u001b[39;00m\n\u001b[1;32m    103\u001b[0m \u001b[38;5;124;03m    \"\"\"\u001b[39;00m\n\u001b[0;32m--> 104\u001b[0m     summary_model, summary_vis_processors, _ \u001b[38;5;241m=\u001b[39m \u001b[43mload_model_and_preprocess\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    105\u001b[0m \u001b[43m        \u001b[49m\u001b[43mname\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mblip_caption\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m    106\u001b[0m \u001b[43m        \u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mbase_coco\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m    107\u001b[0m \u001b[43m        \u001b[49m\u001b[43mis_eval\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m    108\u001b[0m \u001b[43m        \u001b[49m\u001b[43mdevice\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msummary_device\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    109\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    110\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m summary_model, summary_vis_processors\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/__init__.py:195\u001b[0m, in \u001b[0;36mload_model_and_preprocess\u001b[0;34m(name, model_type, is_eval, device)\u001b[0m\n\u001b[1;32m    192\u001b[0m model_cls \u001b[38;5;241m=\u001b[39m registry\u001b[38;5;241m.\u001b[39mget_model_class(name)\n\u001b[1;32m    194\u001b[0m \u001b[38;5;66;03m# load model\u001b[39;00m\n\u001b[0;32m--> 195\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[43mmodel_cls\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_pretrained\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmodel_type\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    197\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m is_eval:\n\u001b[1;32m    198\u001b[0m     model\u001b[38;5;241m.\u001b[39meval()\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/base_model.py:70\u001b[0m, in \u001b[0;36mBaseModel.from_pretrained\u001b[0;34m(cls, model_type)\u001b[0m\n\u001b[1;32m     60\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m     61\u001b[0m \u001b[38;5;124;03mBuild a pretrained model from default configuration file, specified by model_type.\u001b[39;00m\n\u001b[1;32m     62\u001b[0m \n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m     67\u001b[0m \u001b[38;5;124;03m    - model (nn.Module): pretrained or finetuned model, depending on the configuration.\u001b[39;00m\n\u001b[1;32m     68\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m     69\u001b[0m model_cfg \u001b[38;5;241m=\u001b[39m OmegaConf\u001b[38;5;241m.\u001b[39mload(\u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39mdefault_config_path(model_type))\u001b[38;5;241m.\u001b[39mmodel\n\u001b[0;32m---> 70\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_config\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_cfg\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     72\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m model\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/blip_models/blip_caption.py:216\u001b[0m, in \u001b[0;36mBlipCaption.from_config\u001b[0;34m(cls, cfg)\u001b[0m\n\u001b[1;32m    213\u001b[0m prompt \u001b[38;5;241m=\u001b[39m cfg\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mprompt\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m    214\u001b[0m max_txt_len \u001b[38;5;241m=\u001b[39m cfg\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmax_txt_len\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;241m40\u001b[39m)\n\u001b[0;32m--> 216\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mimage_encoder\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtext_decoder\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprompt\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mprompt\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmax_txt_len\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmax_txt_len\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    217\u001b[0m model\u001b[38;5;241m.\u001b[39mload_checkpoint_from_config(cfg)\n\u001b[1;32m    219\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m model\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/blip_models/blip_caption.py:43\u001b[0m, in \u001b[0;36mBlipCaption.__init__\u001b[0;34m(self, image_encoder, text_decoder, prompt, max_txt_len)\u001b[0m\n\u001b[1;32m     40\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, image_encoder, text_decoder, prompt\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m, max_txt_len\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m40\u001b[39m):\n\u001b[1;32m     41\u001b[0m     \u001b[38;5;28msuper\u001b[39m()\u001b[38;5;241m.\u001b[39m\u001b[38;5;21m__init__\u001b[39m()\n\u001b[0;32m---> 43\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtokenizer \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minit_tokenizer\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     45\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mvisual_encoder \u001b[38;5;241m=\u001b[39m image_encoder\n\u001b[1;32m     46\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtext_decoder \u001b[38;5;241m=\u001b[39m text_decoder\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/blip_models/blip.py:22\u001b[0m, in \u001b[0;36mBlipBase.init_tokenizer\u001b[0;34m(cls)\u001b[0m\n\u001b[1;32m     20\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[1;32m     21\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21minit_tokenizer\u001b[39m(\u001b[38;5;28mcls\u001b[39m):\n\u001b[0;32m---> 22\u001b[0m     tokenizer \u001b[38;5;241m=\u001b[39m \u001b[43mBertTokenizer\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_pretrained\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mbert-base-uncased\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m     23\u001b[0m     tokenizer\u001b[38;5;241m.\u001b[39madd_special_tokens({\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mbos_token\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m[DEC]\u001b[39m\u001b[38;5;124m\"\u001b[39m})\n\u001b[1;32m     24\u001b[0m     tokenizer\u001b[38;5;241m.\u001b[39madd_special_tokens({\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124madditional_special_tokens\u001b[39m\u001b[38;5;124m\"\u001b[39m: [\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m[ENC]\u001b[39m\u001b[38;5;124m\"\u001b[39m]})\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1788\u001b[0m, in \u001b[0;36mPreTrainedTokenizerBase.from_pretrained\u001b[0;34m(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)\u001b[0m\n\u001b[1;32m   1782\u001b[0m     logger\u001b[38;5;241m.\u001b[39minfo(\n\u001b[1;32m   1783\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCan\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt load following files from cache: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00munresolved_files\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m and cannot check if these \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1784\u001b[0m         \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mfiles are necessary for the tokenizer to operate.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1785\u001b[0m     )\n\u001b[1;32m   1787\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mall\u001b[39m(full_file_name \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;28;01mfor\u001b[39;00m full_file_name \u001b[38;5;129;01min\u001b[39;00m resolved_vocab_files\u001b[38;5;241m.\u001b[39mvalues()):\n\u001b[0;32m-> 1788\u001b[0m     \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mEnvironmentError\u001b[39;00m(\n\u001b[1;32m   1789\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCan\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt load tokenizer for \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpretrained_model_name_or_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m. If you were trying to load it from \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1790\u001b[0m         \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mhttps://huggingface.co/models\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m, make sure you don\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt have a local directory with the same name. \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1791\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mOtherwise, make sure \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpretrained_model_name_or_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m is the correct path to a directory \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1792\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcontaining all relevant files for a \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m tokenizer.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1793\u001b[0m     )\n\u001b[1;32m   1795\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m file_id, file_path \u001b[38;5;129;01min\u001b[39;00m vocab_files\u001b[38;5;241m.\u001b[39mitems():\n\u001b[1;32m   1796\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m file_id \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;129;01min\u001b[39;00m resolved_vocab_files:\n",
      "\u001b[0;31mOSError\u001b[0m: Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer."
     ]
    }
   ],
   "source": [
    "for key in mydict:\n",
    "    mydict[key] = sm.SummaryDetector(mydict[key]).analyse_image(\n",
    "        summary_model=summary_model, summary_vis_processors=summary_vis_processors\n",
    "    )"
   ]
  },
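  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After the loop has run, the captions are stored directly in the dictionary. As a quick check, you can print the results for a single image before exporting anything (assuming the keys `const_image_summary` and `3_non-deterministic summary` described above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# print the captions generated for the first image\n",
    "# (assumes analyse_image has stored the keys named above)\n",
    "example_key = list(mydict.keys())[0]\n",
    "print(mydict[example_key][\"const_image_summary\"])\n",
    "print(mydict[example_key][\"3_non-deterministic summary\"])"
   ]
  },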
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "Convert the dictionary of dictionarys into a dictionary with lists:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:19:42.081783Z",
     "iopub.status.busy": "2023-08-16T08:19:42.081187Z",
     "iopub.status.idle": "2023-08-16T08:19:42.086886Z",
     "shell.execute_reply": "2023-08-16T08:19:42.086091Z"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "outdict = mutils.append_data_to_dict(mydict)\n",
    "df = mutils.dump_df(outdict)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check the dataframe:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:19:42.090468Z",
     "iopub.status.busy": "2023-08-16T08:19:42.090027Z",
     "iopub.status.idle": "2023-08-16T08:19:42.103920Z",
     "shell.execute_reply": "2023-08-16T08:19:42.103167Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "
\n",
       "\n",
       "
\n",
       "  \n",
       "    \n",
       "       | \n",
       "      filename | \n",
       "    
\n",
       "  \n",
       "  \n",
       "    \n",
       "      | 0 | \n",
       "      data/102730_eng.png | \n",
       "    
\n",
       "    \n",
       "      | 1 | \n",
       "      data/102141_2_eng.png | \n",
       "    
\n",
       "    \n",
       "      | 2 | \n",
       "      data/106349S_por.png | \n",
       "    
\n",
       "  \n",
       "
\n",
       "
 "
      ],
      "text/plain": [
       "                filename\n",
       "0    data/102730_eng.png\n",
       "1  data/102141_2_eng.png\n",
       "2   data/106349S_por.png"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df.head(10)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Write the csv file:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:19:42.107564Z",
     "iopub.status.busy": "2023-08-16T08:19:42.107039Z",
     "iopub.status.idle": "2023-08-16T08:19:42.113075Z",
     "shell.execute_reply": "2023-08-16T08:19:42.112369Z"
    }
   },
   "outputs": [],
   "source": [
    "df.to_csv(\"data_out.csv\")"
   ]
  },
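  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to continue working with the exported results later, the csv file can be read back in with pandas:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# read the exported results back in for further processing\n",
    "df_check = pd.read_csv(\"data_out.csv\", index_col=0)\n",
    "df_check.head()"
   ]
  },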
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Manually inspect the summaries\n",
    "\n",
    "To check the analysis, you can inspect the analyzed elements here. Loading the results takes a moment, so please be patient. If you are sure of what you are doing.\n",
    "\n",
    "`const_image_summary` - the permanent summarys, which does not change from run to run (analyse_image).\n",
    "\n",
    "`3_non-deterministic summary` - 3 different summarys examples that change from run to run (analyse_image). "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:19:42.116597Z",
     "iopub.status.busy": "2023-08-16T08:19:42.116084Z",
     "iopub.status.idle": "2023-08-16T08:19:42.161738Z",
     "shell.execute_reply": "2023-08-16T08:19:42.160695Z"
    },
    "tags": []
   },
   "outputs": [
    {
     "ename": "TypeError",
     "evalue": "__init__() got an unexpected keyword argument 'identify'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mTypeError\u001b[0m                                 Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[10], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m analysis_explorer \u001b[38;5;241m=\u001b[39m \u001b[43mmdisplay\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mAnalysisExplorer\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmydict\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43midentify\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43msummary\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m      2\u001b[0m analysis_explorer\u001b[38;5;241m.\u001b[39mrun_server(port\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m8055\u001b[39m)\n",
      "\u001b[0;31mTypeError\u001b[0m: __init__() got an unexpected keyword argument 'identify'"
     ]
    }
   ],
   "source": [
    "analysis_explorer = mdisplay.AnalysisExplorer(mydict, identify=\"summary\")\n",
    "analysis_explorer.run_server(port=8055)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generate answers to free-form questions about images written in natural language. "
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Set the list of questions as a list of strings:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:19:42.166650Z",
     "iopub.status.busy": "2023-08-16T08:19:42.166100Z",
     "iopub.status.idle": "2023-08-16T08:19:42.170133Z",
     "shell.execute_reply": "2023-08-16T08:19:42.169322Z"
    }
   },
   "outputs": [],
   "source": [
    "list_of_questions = [\n",
    "    \"How many persons on the picture?\",\n",
    "    \"Are there any politicians in the picture?\",\n",
    "    \"Does the picture show something from medicine?\",\n",
    "]"
   ]
  },
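  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before running the question answering on all images, you can try the questions on a single image first; this uses the same `analyse_questions` call as the batch analysis at the end of this notebook:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# try the questions on a single image first\n",
    "# (same analyse_questions call as in the batch loop below)\n",
    "test_key = list(mydict.keys())[0]\n",
    "sm.SummaryDetector(mydict[test_key]).analyse_questions(list_of_questions)"
   ]
  },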
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Explore the analysis using the interface:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:19:42.173965Z",
     "iopub.status.busy": "2023-08-16T08:19:42.173452Z",
     "iopub.status.idle": "2023-08-16T08:19:42.216389Z",
     "shell.execute_reply": "2023-08-16T08:19:42.215383Z"
    }
   },
   "outputs": [
    {
     "ename": "TypeError",
     "evalue": "__init__() got an unexpected keyword argument 'identify'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mTypeError\u001b[0m                                 Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[12], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m analysis_explorer \u001b[38;5;241m=\u001b[39m \u001b[43mmdisplay\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mAnalysisExplorer\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmydict\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43midentify\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43msummary\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m      2\u001b[0m analysis_explorer\u001b[38;5;241m.\u001b[39mrun_server(port\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m8055\u001b[39m)\n",
      "\u001b[0;31mTypeError\u001b[0m: __init__() got an unexpected keyword argument 'identify'"
     ]
    }
   ],
   "source": [
    "analysis_explorer = mdisplay.AnalysisExplorer(mydict, identify=\"summary\")\n",
    "analysis_explorer.run_server(port=8055)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Or directly analyze for further processing\n",
    "Instead of inspecting each of the images, you can also directly carry out the analysis and export the result into a csv. This may take a while depending on how many images you have loaded."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:19:42.221098Z",
     "iopub.status.busy": "2023-08-16T08:19:42.220813Z",
     "iopub.status.idle": "2023-08-16T08:20:28.368650Z",
     "shell.execute_reply": "2023-08-16T08:20:28.367041Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: fea2d868-d5bc-417d-9e00-4db1e10f2a39)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: f0ccba03-f0e5-43ae-be4b-91ed53e31894)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/added_tokens.json\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: b28cdc86-997c-466d-ad54-a6022a60c715)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/special_tokens_map.json\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "'(ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: 115b5242-dca3-428c-a002-00eadbc77c47)')' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json\n"
     ]
    },
    {
     "ename": "OSError",
     "evalue": "Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer.",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mOSError\u001b[0m                                   Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[13], line 2\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m key \u001b[38;5;129;01min\u001b[39;00m mydict:\n\u001b[0;32m----> 2\u001b[0m     mydict[key] \u001b[38;5;241m=\u001b[39m \u001b[43msm\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mSummaryDetector\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmydict\u001b[49m\u001b[43m[\u001b[49m\u001b[43mkey\u001b[49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241m.\u001b[39manalyse_questions(list_of_questions)\n",
      "File \u001b[0;32m~/work/AMMICO/AMMICO/ammico/summary.py:72\u001b[0m, in \u001b[0;36mSummaryDetector.__init__\u001b[0;34m(self, subdict, summary_model_type, analysis_type, list_of_questions, summary_model, summary_vis_processors, summary_vqa_model, summary_vqa_vis_processors, summary_vqa_txt_processors)\u001b[0m\n\u001b[1;32m     66\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlist_of_questions \u001b[38;5;241m=\u001b[39m list_of_questions\n\u001b[1;32m     67\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[1;32m     68\u001b[0m     (summary_model \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m     69\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (summary_vis_processors \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m     70\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (analysis_type \u001b[38;5;241m!=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mquestions\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m     71\u001b[0m ):\n\u001b[0;32m---> 72\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msummary_model, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msummary_vis_processors \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mload_model\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m     73\u001b[0m \u001b[43m        \u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43msummary_model_type\u001b[49m\n\u001b[1;32m     74\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     75\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m     76\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msummary_model \u001b[38;5;241m=\u001b[39m summary_model\n",
      "File \u001b[0;32m~/work/AMMICO/AMMICO/ammico/summary.py:145\u001b[0m, in \u001b[0;36mSummaryDetector.load_model\u001b[0;34m(self, model_type)\u001b[0m\n\u001b[1;32m    131\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m    132\u001b[0m \u001b[38;5;124;03mLoad blip_caption model and preprocessors for visual inputs from lavis.models.\u001b[39;00m\n\u001b[1;32m    133\u001b[0m \n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m    139\u001b[0m \u001b[38;5;124;03m    vis_processors (dict): preprocessors for visual inputs.\u001b[39;00m\n\u001b[1;32m    140\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m    141\u001b[0m select_model \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m    142\u001b[0m     \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mbase\u001b[39m\u001b[38;5;124m\"\u001b[39m: SummaryDetector\u001b[38;5;241m.\u001b[39mload_model_base,\n\u001b[1;32m    143\u001b[0m     \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mlarge\u001b[39m\u001b[38;5;124m\"\u001b[39m: SummaryDetector\u001b[38;5;241m.\u001b[39mload_model_large,\n\u001b[1;32m    144\u001b[0m }\n\u001b[0;32m--> 145\u001b[0m summary_model, summary_vis_processors \u001b[38;5;241m=\u001b[39m \u001b[43mselect_model\u001b[49m\u001b[43m[\u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[43m]\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m    146\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m summary_model, summary_vis_processors\n",
      "File \u001b[0;32m~/work/AMMICO/AMMICO/ammico/summary.py:104\u001b[0m, in \u001b[0;36mSummaryDetector.load_model_base\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m     94\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mload_model_base\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m     95\u001b[0m \u001b[38;5;250m    \u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m     96\u001b[0m \u001b[38;5;124;03m    Load base_coco blip_caption model and preprocessors for visual inputs from lavis.models.\u001b[39;00m\n\u001b[1;32m     97\u001b[0m \n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m    102\u001b[0m \u001b[38;5;124;03m        vis_processors (dict): preprocessors for visual inputs.\u001b[39;00m\n\u001b[1;32m    103\u001b[0m \u001b[38;5;124;03m    \"\"\"\u001b[39;00m\n\u001b[0;32m--> 104\u001b[0m     summary_model, summary_vis_processors, _ \u001b[38;5;241m=\u001b[39m \u001b[43mload_model_and_preprocess\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    105\u001b[0m \u001b[43m        \u001b[49m\u001b[43mname\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mblip_caption\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m    106\u001b[0m \u001b[43m        \u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mbase_coco\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m    107\u001b[0m \u001b[43m        \u001b[49m\u001b[43mis_eval\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m    108\u001b[0m \u001b[43m        \u001b[49m\u001b[43mdevice\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msummary_device\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    109\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    110\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m summary_model, summary_vis_processors\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/__init__.py:195\u001b[0m, in \u001b[0;36mload_model_and_preprocess\u001b[0;34m(name, model_type, is_eval, device)\u001b[0m\n\u001b[1;32m    192\u001b[0m model_cls \u001b[38;5;241m=\u001b[39m registry\u001b[38;5;241m.\u001b[39mget_model_class(name)\n\u001b[1;32m    194\u001b[0m \u001b[38;5;66;03m# load model\u001b[39;00m\n\u001b[0;32m--> 195\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[43mmodel_cls\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_pretrained\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_type\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmodel_type\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    197\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m is_eval:\n\u001b[1;32m    198\u001b[0m     model\u001b[38;5;241m.\u001b[39meval()\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/base_model.py:70\u001b[0m, in \u001b[0;36mBaseModel.from_pretrained\u001b[0;34m(cls, model_type)\u001b[0m\n\u001b[1;32m     60\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m     61\u001b[0m \u001b[38;5;124;03mBuild a pretrained model from default configuration file, specified by model_type.\u001b[39;00m\n\u001b[1;32m     62\u001b[0m \n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m     67\u001b[0m \u001b[38;5;124;03m    - model (nn.Module): pretrained or finetuned model, depending on the configuration.\u001b[39;00m\n\u001b[1;32m     68\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m     69\u001b[0m model_cfg \u001b[38;5;241m=\u001b[39m OmegaConf\u001b[38;5;241m.\u001b[39mload(\u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39mdefault_config_path(model_type))\u001b[38;5;241m.\u001b[39mmodel\n\u001b[0;32m---> 70\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_config\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_cfg\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     72\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m model\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/blip_models/blip_caption.py:216\u001b[0m, in \u001b[0;36mBlipCaption.from_config\u001b[0;34m(cls, cfg)\u001b[0m\n\u001b[1;32m    213\u001b[0m prompt \u001b[38;5;241m=\u001b[39m cfg\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mprompt\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m    214\u001b[0m max_txt_len \u001b[38;5;241m=\u001b[39m cfg\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmax_txt_len\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;241m40\u001b[39m)\n\u001b[0;32m--> 216\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mimage_encoder\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtext_decoder\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprompt\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mprompt\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmax_txt_len\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmax_txt_len\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    217\u001b[0m model\u001b[38;5;241m.\u001b[39mload_checkpoint_from_config(cfg)\n\u001b[1;32m    219\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m model\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/blip_models/blip_caption.py:43\u001b[0m, in \u001b[0;36mBlipCaption.__init__\u001b[0;34m(self, image_encoder, text_decoder, prompt, max_txt_len)\u001b[0m\n\u001b[1;32m     40\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, image_encoder, text_decoder, prompt\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m, max_txt_len\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m40\u001b[39m):\n\u001b[1;32m     41\u001b[0m     \u001b[38;5;28msuper\u001b[39m()\u001b[38;5;241m.\u001b[39m\u001b[38;5;21m__init__\u001b[39m()\n\u001b[0;32m---> 43\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtokenizer \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minit_tokenizer\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     45\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mvisual_encoder \u001b[38;5;241m=\u001b[39m image_encoder\n\u001b[1;32m     46\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtext_decoder \u001b[38;5;241m=\u001b[39m text_decoder\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/lavis/models/blip_models/blip.py:22\u001b[0m, in \u001b[0;36mBlipBase.init_tokenizer\u001b[0;34m(cls)\u001b[0m\n\u001b[1;32m     20\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[1;32m     21\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21minit_tokenizer\u001b[39m(\u001b[38;5;28mcls\u001b[39m):\n\u001b[0;32m---> 22\u001b[0m     tokenizer \u001b[38;5;241m=\u001b[39m \u001b[43mBertTokenizer\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_pretrained\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mbert-base-uncased\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m     23\u001b[0m     tokenizer\u001b[38;5;241m.\u001b[39madd_special_tokens({\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mbos_token\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m[DEC]\u001b[39m\u001b[38;5;124m\"\u001b[39m})\n\u001b[1;32m     24\u001b[0m     tokenizer\u001b[38;5;241m.\u001b[39madd_special_tokens({\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124madditional_special_tokens\u001b[39m\u001b[38;5;124m\"\u001b[39m: [\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m[ENC]\u001b[39m\u001b[38;5;124m\"\u001b[39m]})\n",
      "File \u001b[0;32m/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1788\u001b[0m, in \u001b[0;36mPreTrainedTokenizerBase.from_pretrained\u001b[0;34m(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)\u001b[0m\n\u001b[1;32m   1782\u001b[0m     logger\u001b[38;5;241m.\u001b[39minfo(\n\u001b[1;32m   1783\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCan\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt load following files from cache: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00munresolved_files\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m and cannot check if these \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1784\u001b[0m         \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mfiles are necessary for the tokenizer to operate.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1785\u001b[0m     )\n\u001b[1;32m   1787\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mall\u001b[39m(full_file_name \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;28;01mfor\u001b[39;00m full_file_name \u001b[38;5;129;01min\u001b[39;00m resolved_vocab_files\u001b[38;5;241m.\u001b[39mvalues()):\n\u001b[0;32m-> 1788\u001b[0m     \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mEnvironmentError\u001b[39;00m(\n\u001b[1;32m   1789\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCan\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt load tokenizer for \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpretrained_model_name_or_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m. If you were trying to load it from \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1790\u001b[0m         \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mhttps://huggingface.co/models\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m, make sure you don\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt have a local directory with the same name. \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1791\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mOtherwise, make sure \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpretrained_model_name_or_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m is the correct path to a directory \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1792\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcontaining all relevant files for a \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m tokenizer.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m   1793\u001b[0m     )\n\u001b[1;32m   1795\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m file_id, file_path \u001b[38;5;129;01min\u001b[39;00m vocab_files\u001b[38;5;241m.\u001b[39mitems():\n\u001b[1;32m   1796\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m file_id \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;129;01min\u001b[39;00m resolved_vocab_files:\n",
      "\u001b[0;31mOSError\u001b[0m: Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer."
     ]
    }
   ],
   "source": [
    "for key in mydict:\n",
    "    mydict[key] = sm.SummaryDetector(mydict[key]).analyse_questions(list_of_questions)"
   ]
  },
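  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `OSError` shown above stems from the automated build environment, which could not download the `bert-base-uncased` tokenizer from the Hugging Face hub. With network access (or an already populated Hugging Face cache) this cell should run through and store the answers to the questions in `mydict`. If you hit the same error, pre-downloading the tokenizer once is usually enough, since it is then cached for later runs. A minimal sketch, assuming the `transformers` package (a LAVIS dependency) is available:\n",
    "\n",
    "```python\n",
    "from transformers import BertTokenizer\n",
    "\n",
    "# download and cache the tokenizer files for later (possibly offline) runs\n",
    "BertTokenizer.from_pretrained(\"bert-base-uncased\")\n",
    "```"
   ]
  },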
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Convert to dataframe and write csv\n",
    "These steps are required to convert the dictionary of dictionarys into a dictionary with lists, that can be converted into a pandas dataframe and exported to a csv file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:20:28.373313Z",
     "iopub.status.busy": "2023-08-16T08:20:28.372714Z",
     "iopub.status.idle": "2023-08-16T08:20:28.377439Z",
     "shell.execute_reply": "2023-08-16T08:20:28.376588Z"
    }
   },
   "outputs": [],
   "source": [
    "outdict2 = mutils.append_data_to_dict(mydict)\n",
    "df2 = mutils.dump_df(outdict2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:20:28.381355Z",
     "iopub.status.busy": "2023-08-16T08:20:28.380897Z",
     "iopub.status.idle": "2023-08-16T08:20:28.388838Z",
     "shell.execute_reply": "2023-08-16T08:20:28.388009Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "\n",
       "
\n",
       "  \n",
       "    \n",
       "       | \n",
       "      filename | \n",
       "    
\n",
       "  \n",
       "  \n",
       "    \n",
       "      | 0 | \n",
       "      data/102730_eng.png | \n",
       "    
\n",
       "    \n",
       "      | 1 | \n",
       "      data/102141_2_eng.png | \n",
       "    
\n",
       "    \n",
       "      | 2 | \n",
       "      data/106349S_por.png | \n",
       "    
\n",
       "  \n",
       "
\n",
       "
 "
      ],
      "text/plain": [
       "                filename\n",
       "0    data/102730_eng.png\n",
       "1  data/102141_2_eng.png\n",
       "2   data/106349S_por.png"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df2.head(10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-16T08:20:28.393391Z",
     "iopub.status.busy": "2023-08-16T08:20:28.392929Z",
     "iopub.status.idle": "2023-08-16T08:20:28.397857Z",
     "shell.execute_reply": "2023-08-16T08:20:28.397054Z"
    }
   },
   "outputs": [],
   "source": [
    "df2.to_csv(\"data_out2.csv\")"
   ]
  },
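  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The exported file can be read back for further analysis, for example (a minimal sketch; `data_out2.csv` is the file written above):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# index_col=0 restores the index column that to_csv wrote out\n",
    "df = pd.read_csv(\"data_out2.csv\", index_col=0)\n",
    "```"
   ]
  }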
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.17"
  },
  "vscode": {
   "interpreter": {
    "hash": "f1142466f556ab37fe2d38e2897a16796906208adb09fea90ba58bdf8a56f0ba"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}