Mirror of
https://github.com/ssciwr/AMMICO.git
synced 2025-10-29 13:06:04 +02:00
[pre-commit.ci] pre-commit autoupdate (#184)
* [pre-commit.ci] pre-commit autoupdate

  updates:
  - [github.com/kynan/nbstripout: 0.6.1 → 0.7.1](https://github.com/kynan/nbstripout/compare/0.6.1...0.7.1)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This commit is contained in:
parent
b4aae9321c
Commit
fcb2d55740
@@ -1,6 +1,6 @@
 repos:
   - repo: https://github.com/kynan/nbstripout
-    rev: 0.6.1
+    rev: 0.7.1
     hooks:
       - id: nbstripout
         files: ".ipynb"
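Every notebook hunk below makes the same mechanical change: the random hex cell ids (e.g. `"b25986d7"`) are rewritten to sequential string ids (`"0"`, `"1"`, …) by the auto fixes that followed the nbstripout bump. A minimal Python sketch of that observable effect (this is not nbstripout's actual implementation; the function name is illustrative):

```python
import json

def normalize_cell_ids(notebook_json: str) -> str:
    """Renumber notebook cell ids sequentially ("0", "1", ...),
    mirroring the id rewrites visible in the hunks below."""
    nb = json.loads(notebook_json)
    for i, cell in enumerate(nb.get("cells", [])):
        cell["id"] = str(i)
    return json.dumps(nb)

# Two cells with hex ids taken from the first notebook diff below.
nb = json.dumps({"cells": [
    {"cell_type": "markdown", "id": "b25986d7", "metadata": {}, "source": []},
    {"cell_type": "code", "id": "70ffb7e2", "metadata": {}, "outputs": [], "source": []},
]})
out = json.loads(normalize_cell_ids(nb))
print([c["id"] for c in out["cells"]])  # ['0', '1']
```

Sequential ids stay stable when a notebook is stripped again, which keeps future diffs small.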
|
||||
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "b25986d7",
+   "id": "0",
    "metadata": {},
    "source": [
     "# Crop posts from social media posts images"
@@ -10,7 +10,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "c8a5a491",
+   "id": "1",
    "metadata": {},
    "source": [
     "Crop posts from social media posts images, to keep import text informations from social media posts images.\n",
@@ -20,7 +20,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "70ffb7e2",
+   "id": "2",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -51,7 +51,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "5ae02c45",
+   "id": "3",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -64,7 +64,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "e7b8127f",
+   "id": "4",
    "metadata": {},
    "source": [
     "The cropping is carried out by finding reference images on the image to be cropped. If a reference matches a region on the image, then everything below the matched region is removed. Manually look at a reference and an example post with the code below."
@@ -73,7 +73,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "d04d0e86",
+   "id": "5",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -96,7 +96,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "49a11f61",
+   "id": "6",
    "metadata": {},
    "source": [
     "You can now crop the image and check on the way that everything looks fine. `plt_match` will plot the matches on the image and below which line content will be cropped; `plt_crop` will plot the cropped text part of the social media post with the comments removed; `plt_image` will plot the image part of the social media post if applicable."
@@ -105,7 +105,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "71850d9d",
+   "id": "7",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -119,7 +119,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "1929e549",
+   "id": "8",
    "metadata": {},
    "source": [
     "Batch crop images from the image folder given in `crop_dir`. The cropped images will save in `save_crop_dir` folder with the same file name as the original file. The reference images with the items to match are provided in `ref_dir`.\n",
@@ -130,7 +130,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "eef89291",
+   "id": "9",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -3,7 +3,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "d2c4d40d-8aca-4024-8d19-a65c4efe825d",
+   "id": "0",
    "metadata": {},
    "source": [
     "# Facial Expression recognition with DeepFace"
@@ -12,7 +12,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "51f8888b-d1a3-4b85-a596-95c0993fa192",
+   "id": "1",
    "metadata": {},
    "source": [
     "Facial expressions can be detected using [DeepFace](https://github.com/serengil/deepface) and [RetinaFace](https://github.com/serengil/retinaface).\n",
@@ -25,7 +25,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "50c1c1c7",
+   "id": "2",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -50,7 +50,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "b21e52a5-d379-42db-aae6-f2ab9ed9a369",
+   "id": "3",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -60,7 +60,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "a2bd2153",
+   "id": "4",
    "metadata": {},
    "source": [
     "We select a subset of image files to try facial expression detection on, see the `limit` keyword. The `find_files` function finds image files within a given directory and initialize the main dictionary that contains all information for the images and is updated through each subsequent analysis::"
@@ -69,7 +69,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "afe7e638-f09d-47e7-9295-1c374bd64c53",
+   "id": "5",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -84,7 +84,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "a9372561",
+   "id": "6",
    "metadata": {},
    "source": [
     "To check the analysis, you can inspect the analyzed elements here. Loading the results takes a moment, so please be patient. If you are sure of what you are doing, you can skip this and directly export a csv file in the step below.\n",
@@ -94,7 +94,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "992499ed-33f1-4425-ad5d-738cf565d175",
+   "id": "7",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -105,7 +105,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "6f974341",
+   "id": "8",
    "metadata": {},
    "source": [
     "Instead of inspecting each of the images, you can also directly carry out the analysis and export the result into a csv. This may take a while depending on how many images you have loaded."
@@ -114,7 +114,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "6f97c7d0",
+   "id": "9",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -125,7 +125,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "174357b1",
+   "id": "10",
    "metadata": {},
    "source": [
     "These steps are required to convert the dictionary of dictionarys into a dictionary with lists, that can be converted into a pandas dataframe and exported to a csv file."
@@ -134,7 +134,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "604bd257",
+   "id": "11",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -144,7 +144,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "8373d9f8",
+   "id": "12",
    "metadata": {},
    "source": [
     "Check the dataframe:"
@@ -153,7 +153,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "aa4b518a",
+   "id": "13",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -163,7 +163,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "579cd59f",
+   "id": "14",
    "metadata": {},
    "source": [
     "Write the csv file - here you should provide a file path and file name for the csv file to be written."
@@ -172,7 +172,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "4618decb",
+   "id": "15",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "dcaa3da1",
+   "id": "0",
    "metadata": {},
    "source": [
     "# Notebook for text extraction on image\n",
@@ -28,7 +28,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "f43f327c",
+   "id": "1",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -53,7 +53,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "cf362e60",
+   "id": "2",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -63,7 +63,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "fddba721",
+   "id": "3",
    "metadata": {},
    "source": [
     "We select a subset of image files to try the text extraction on, see the `limit` keyword. The `find_files` function finds image files within a given directory and initialize the main dictionary that contains all information for the images and is updated through each subsequent analysis: "
@@ -72,7 +72,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "27675810",
+   "id": "4",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -87,7 +87,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "7b8b929f",
+   "id": "5",
    "metadata": {},
    "source": [
     "# Google cloud vision API\n",
@@ -98,7 +98,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "cbf74c0b-52fe-4fb8-b617-f18611e8f986",
+   "id": "6",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -109,7 +109,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "0891b795-c7fe-454c-a45d-45fadf788142",
+   "id": "7",
    "metadata": {},
    "source": [
     "## Inspect the elements per image\n",
@@ -120,7 +120,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "7c6ecc88",
+   "id": "8",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -130,7 +130,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "9c3e72b5-0e57-4019-b45e-3e36a74e7f52",
+   "id": "9",
    "metadata": {},
    "source": [
     "## Or directly analyze for further processing\n",
@@ -140,7 +140,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "365c78b1-7ff4-4213-86fa-6a0a2d05198f",
+   "id": "10",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -152,7 +152,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "3c063eda",
+   "id": "11",
    "metadata": {},
    "source": [
     "## Convert to dataframe and write csv\n",
@@ -162,7 +162,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "5709c2cd",
+   "id": "12",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -171,7 +171,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "ae182eb7",
+   "id": "13",
    "metadata": {},
    "source": [
     "Check the dataframe:"
@@ -180,7 +180,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "c4f05637",
+   "id": "14",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -189,7 +189,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "eedf1e47",
+   "id": "15",
    "metadata": {},
    "source": [
     "Write the csv file - here you should provide a file path and file name for the csv file to be written."
@@ -198,7 +198,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "bf6c9ddb",
+   "id": "16",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -208,7 +208,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "4bc8ac0a",
+   "id": "17",
    "metadata": {},
    "source": [
     "# Topic analysis\n",
@@ -217,7 +217,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "4931941b",
+   "id": "18",
    "metadata": {},
    "source": [
     "BERTopic takes a list of strings as input. The more items in the list, the better for the topic modeling. If the below returns an error for `analyse_topic()`, the reason can be that your dataset is too small.\n",
@@ -232,7 +232,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "a3450a61",
+   "id": "19",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -244,7 +244,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "95667342",
+   "id": "20",
    "metadata": {},
    "source": [
     "### Option 2: Read in a csv\n",
@@ -254,7 +254,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "5530e436",
+   "id": "21",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -266,7 +266,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "0b6ef6d7",
+   "id": "22",
    "metadata": {},
    "source": [
     "### Access frequent topics\n",
@@ -276,7 +276,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "43288cda-61bb-4ff1-a209-dcfcc4916b1f",
+   "id": "23",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -285,7 +285,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "b3316770",
+   "id": "24",
    "metadata": {},
    "source": [
     "### Get information for specific topic\n",
@@ -295,7 +295,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "db14fe03",
+   "id": "25",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -305,7 +305,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "d10f701e",
+   "id": "26",
    "metadata": {},
    "source": [
     "### Topic visualization\n",
@@ -315,7 +315,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "2331afe6",
+   "id": "27",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -324,7 +324,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "f4eaf353",
+   "id": "28",
    "metadata": {},
    "source": [
     "### Save the model\n",
@@ -334,7 +334,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "e5e8377c",
+   "id": "29",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -3,7 +3,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "22df2297-0629-45aa-b88c-6c61f1544db6",
+   "id": "0",
    "metadata": {},
    "source": [
     "# Image Multimodal Search"
@@ -12,7 +12,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "9eeeb302-296e-48dc-86c7-254aa02f2b3a",
+   "id": "1",
    "metadata": {},
    "source": [
     "This notebooks shows how to carry out an image multimodal search with the [LAVIS](https://github.com/salesforce/LAVIS) library. \n",
@@ -25,7 +25,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "0b0a6bdf",
+   "id": "2",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -50,7 +50,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "f10ad6c9-b1a0-4043-8c5d-ed660d77be37",
+   "id": "3",
    "metadata": {
     "tags": []
    },
@@ -62,7 +62,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "8d3fe589-ff3c-4575-b8f5-650db85596bc",
+   "id": "4",
    "metadata": {
     "tags": []
    },
@@ -78,7 +78,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "987540a8-d800-4c70-a76b-7bfabaf123fa",
+   "id": "5",
    "metadata": {},
    "source": [
     "## Indexing and extracting features from images in selected folder"
@@ -87,7 +87,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "66d6ede4-00bc-4aeb-9a36-e52d7de33fe5",
+   "id": "6",
    "metadata": {},
    "source": [
     "First you need to select a model. You can choose one of the following models: \n",
@@ -102,7 +102,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "7bbca1f0-d4b0-43cd-8e05-ee39d37c328e",
+   "id": "7",
    "metadata": {
     "tags": []
    },
@@ -119,7 +119,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "357828c9",
+   "id": "8",
    "metadata": {},
    "source": [
     "To process the loaded images using the selected model, use the below code:"
@@ -128,7 +128,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "f6f2c9b1-4a91-47cb-86b5-2c9c67e4837b",
+   "id": "9",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -138,7 +138,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "ca095404-57d0-4f5d-aeb0-38c232252b17",
+   "id": "10",
    "metadata": {
     "tags": []
    },
@@ -160,7 +160,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "9ff8a894-566b-4c4f-acca-21c50b5b1f52",
+   "id": "11",
    "metadata": {},
    "source": [
     "The images are then processed and stored in a numerical representation, a tensor. These tensors do not change for the same image and same model - so if you run this analysis once, and save the tensors giving a path with the keyword `path_to_save_tensors`, a file with filename `.<Number_of_images>_<model_name>_saved_features_image.pt` will be placed there.\n",
@@ -171,7 +171,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "56c6d488-f093-4661-835a-5c73a329c874",
+   "id": "12",
    "metadata": {
     "tags": []
    },
@@ -193,7 +193,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "309923c1-d6f8-4424-8fca-bde5f3a98b38",
+   "id": "13",
    "metadata": {},
    "source": [
     "Here we already processed our image folder with 5 images and the `clip_base` model. So you need just to write the name `5_clip_base_saved_features_image.pt` of the saved file that consists of tensors of all images as keyword argument for `path_to_load_tensors`. "
@@ -202,7 +202,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "162a52e8-6652-4897-b92e-645cab07aaef",
+   "id": "14",
    "metadata": {},
    "source": [
     "## Formulate your search queries\n",
@@ -213,7 +213,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "c4196a52-d01e-42e4-8674-5712f7d6f792",
+   "id": "15",
    "metadata": {
     "tags": []
    },
@@ -233,7 +233,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "8bcf3127-3dfd-4ff4-b9e7-a043099b1418",
+   "id": "16",
    "metadata": {},
    "source": [
     "You can filter your results in 3 different ways:\n",
@@ -245,7 +245,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "7f7dc52f-7ee9-4590-96b7-e0d9d3b82378",
+   "id": "17",
    "metadata": {
     "tags": []
    },
@@ -266,7 +266,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "65210ca2-b674-44bd-807a-4165e14bad74",
+   "id": "18",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -276,7 +276,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "557473df-e2b9-4ef0-9439-3daadf6741ac",
+   "id": "19",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -286,7 +286,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "e1cf7e46-0c2c-4fb2-b89a-ef585ccb9339",
+   "id": "20",
    "metadata": {},
    "source": [
     "After launching `multimodal_search` function, the results of each query will be added to the source dictionary. "
@@ -295,7 +295,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "c93d7e88-594d-4095-b5f2-7bf01210dc61",
+   "id": "21",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -305,7 +305,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "cd3ee120-8561-482b-a76a-e8f996783325",
+   "id": "22",
    "metadata": {},
    "source": [
     "A special function was written to present the search results conveniently. "
@@ -314,7 +314,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "4324e4fd-e9aa-4933-bb12-074d54e0c510",
+   "id": "23",
    "metadata": {
     "tags": []
    },
@@ -328,7 +328,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "0b750e9f-fe64-4028-9caf-52d7187462f1",
+   "id": "24",
    "metadata": {},
    "source": [
     "## Improve the search results\n",
@@ -339,7 +339,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "b3af7b39-6d0d-4da3-9b8f-7dfd3f5779be",
+   "id": "25",
    "metadata": {
     "tags": []
    },
@@ -353,7 +353,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "caf1f4ae-4b37-4954-800e-7120f0419de5",
+   "id": "26",
    "metadata": {
     "tags": []
    },
@@ -372,7 +372,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "9e98c150-5fab-4251-bce7-0d8fc7b385b9",
+   "id": "27",
    "metadata": {},
    "source": [
     "Then using the same output function you can add the `itm=True` argument to output the new image order. Remember that for images querys, an error will be thrown with `itm=True` argument. You can also add the `image_gradcam_with_itm` along with `itm=True` argument to output the heat maps of the calculated images."
@@ -381,7 +381,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "6a829b99-5230-463a-8b11-30ffbb67fc3a",
+   "id": "28",
    "metadata": {
     "tags": []
    },
@@ -395,7 +395,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "d86ab96b-1907-4b7f-a78e-3983b516d781",
+   "id": "29",
    "metadata": {
     "tags": []
    },
@@ -406,7 +406,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "4bdbc4d4-695d-4751-ab7c-d2d98e2917d7",
+   "id": "30",
    "metadata": {
     "tags": []
    },
@@ -417,7 +417,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "6c6ddd83-bc87-48f2-a8d6-1bd3f4201ff7",
+   "id": "31",
    "metadata": {
     "tags": []
    },
@@ -430,7 +430,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "ea2675d5-604c-45e7-86d2-080b1f4559a0",
+   "id": "32",
    "metadata": {
     "tags": []
    },
@@ -441,7 +441,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "e78646d6-80be-4d3e-8123-3360957bcaa8",
+   "id": "33",
    "metadata": {
     "tags": []
    },
@@ -453,7 +453,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "05546d99-afab-4565-8f30-f14e1426abcf",
+   "id": "34",
    "metadata": {},
    "source": [
     "Write the csv file:"
@@ -462,7 +462,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "185f7dde-20dc-44d8-9ab0-de41f9b5734d",
+   "id": "35",
    "metadata": {
     "tags": []
    },
@@ -3,7 +3,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "b25986d7",
+   "id": "0",
    "metadata": {},
    "source": [
     "# Crop posts module"
@@ -12,7 +12,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "c8a5a491",
+   "id": "1",
    "metadata": {},
    "source": [
     "Crop posts from social media posts images, to keep import text informations from social media posts images.\n",
@@ -22,7 +22,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "70ffb7e2",
+   "id": "2",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -51,7 +51,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "5ae02c45",
+   "id": "3",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -66,7 +66,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "e7b8127f",
+   "id": "4",
    "metadata": {},
    "source": [
     "The cropping is carried out by finding reference images on the image to be cropped. If a reference matches a region on the image, then everything below the matched region is removed. Manually look at a reference and an example post with the code below."
@@ -75,7 +75,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "d04d0e86",
+   "id": "5",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -99,7 +99,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "49a11f61",
+   "id": "6",
    "metadata": {},
    "source": [
     "You can now crop the image and check on the way that everything looks fine. `plt_match` will plot the matches on the image and below which line content will be cropped; `plt_crop` will plot the cropped text part of the social media post with the comments removed; `plt_image` will plot the image part of the social media post if applicable."
@@ -108,7 +108,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "71850d9d",
+   "id": "7",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -123,7 +123,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "1929e549",
+   "id": "8",
    "metadata": {},
    "source": [
     "Batch crop images from the image folder given in `crop_dir`. The cropped images will save in `save_crop_dir` folder with the same file name as the original file. The reference images with the items to match are provided in `ref_dir`.\n",
@@ -134,7 +134,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "eef89291",
+   "id": "9",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -153,7 +153,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "b3b3c1ad",
+   "id": "10",
    "metadata": {},
    "outputs": [],
    "source": []
@@ -3,7 +3,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "22df2297-0629-45aa-b88c-6c61f1544db6",
+   "id": "0",
    "metadata": {},
    "source": [
     "# Multimodal search module"
@@ -12,7 +12,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "9eeeb302-296e-48dc-86c7-254aa02f2b3a",
+   "id": "1",
    "metadata": {},
    "source": [
     "This notebooks shows how to carry out an image multimodal search with the [LAVIS](https://github.com/salesforce/LAVIS) library. \n",
@@ -25,7 +25,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "0b0a6bdf",
+   "id": "2",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -48,7 +48,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "9d58a23e",
+   "id": "3",
    "metadata": {
     "nbsphinx": "hidden"
    },
@@ -63,7 +63,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "f10ad6c9-b1a0-4043-8c5d-ed660d77be37",
+   "id": "4",
    "metadata": {
     "tags": []
    },
@@ -76,7 +76,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "8d3fe589-ff3c-4575-b8f5-650db85596bc",
+   "id": "5",
    "metadata": {
     "tags": []
    },
@@ -91,7 +91,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "a08bd3a9-e954-4a0e-ad64-6817abd3a25a",
+   "id": "6",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -101,7 +101,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "987540a8-d800-4c70-a76b-7bfabaf123fa",
+   "id": "7",
    "metadata": {},
    "source": [
     "## Indexing and extracting features from images in selected folder"
@@ -110,7 +110,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "66d6ede4-00bc-4aeb-9a36-e52d7de33fe5",
+   "id": "8",
    "metadata": {},
    "source": [
     "First you need to select a model. You can choose one of the following models: \n",
@@ -125,7 +125,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "7bbca1f0-d4b0-43cd-8e05-ee39d37c328e",
+   "id": "9",
    "metadata": {
     "tags": []
    },
@@ -142,7 +142,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "357828c9",
+   "id": "10",
    "metadata": {},
    "source": [
     "To process the loaded images using the selected model, use the below code:"
@@ -151,7 +151,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "f6f2c9b1-4a91-47cb-86b5-2c9c67e4837b",
+   "id": "11",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -161,7 +161,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "16603ded-078e-4362-847b-57ad76829327",
+   "id": "12",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -171,7 +171,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "ca095404-57d0-4f5d-aeb0-38c232252b17",
+   "id": "13",
    "metadata": {
     "tags": []
    },
@@ -193,7 +193,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "f236c3b1-c3a6-471a-9fc5-ef831b675286",
+   "id": "14",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -203,7 +203,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "9ff8a894-566b-4c4f-acca-21c50b5b1f52",
+   "id": "15",
    "metadata": {},
    "source": [
     "The images are then processed and stored in a numerical representation, a tensor. These tensors do not change for the same image and same model - so if you run this analysis once, and save the tensors giving a path with the keyword `path_to_save_tensors`, a file with filename `.<Number_of_images>_<model_name>_saved_features_image.pt` will be placed there.\n",
@@ -214,7 +214,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "56c6d488-f093-4661-835a-5c73a329c874",
+   "id": "16",
    "metadata": {
     "tags": []
    },
@@ -236,7 +236,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "309923c1-d6f8-4424-8fca-bde5f3a98b38",
+   "id": "17",
    "metadata": {},
    "source": [
     "Here we already processed our image folder with 5 images and the `clip_base` model. So you need just to write the name `5_clip_base_saved_features_image.pt` of the saved file that consists of tensors of all images as keyword argument for `path_to_load_tensors`. "
@@ -245,7 +245,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "162a52e8-6652-4897-b92e-645cab07aaef",
+   "id": "18",
    "metadata": {},
    "source": [
     "## Formulate your search queries\n",
@@ -256,7 +256,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "c4196a52-d01e-42e4-8674-5712f7d6f792",
+   "id": "19",
    "metadata": {
     "tags": []
    },
@@ -272,7 +272,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "8bcf3127-3dfd-4ff4-b9e7-a043099b1418",
+   "id": "20",
    "metadata": {},
    "source": [
     "You can filter your results in 3 different ways:\n",
@@ -284,7 +284,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "7f7dc52f-7ee9-4590-96b7-e0d9d3b82378",
+   "id": "21",
    "metadata": {
     "tags": []
    },
@@ -305,7 +305,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "65210ca2-b674-44bd-807a-4165e14bad74",
+   "id": "22",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -315,7 +315,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "557473df-e2b9-4ef0-9439-3daadf6741ac",
+   "id": "23",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -325,7 +325,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "c93d7e88-594d-4095-b5f2-7bf01210dc61",
+   "id": "24",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -335,7 +335,7 @@
   {
    "attachments": {},
    "cell_type": "markdown",
-   "id": "e1cf7e46-0c2c-4fb2-b89a-ef585ccb9339",
+   "id": "25",
    "metadata": {},
    "source": [
     "After launching `multimodal_search` function, the results of each query will be added to the source dictionary. "
@@ -344,7 +344,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "9ad74b21-6187-4a58-9ed8-fd3e80f5a4ed",
|
||||
"id": "26",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -356,7 +356,7 @@
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "cd3ee120-8561-482b-a76a-e8f996783325",
|
||||
"id": "27",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"A special function was written to present the search results conveniently. "
|
||||
@ -365,7 +365,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "4324e4fd-e9aa-4933-bb12-074d54e0c510",
|
||||
"id": "28",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -379,7 +379,7 @@
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "0b750e9f-fe64-4028-9caf-52d7187462f1",
|
||||
"id": "29",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Improve the search results\n",
|
||||
@ -390,7 +390,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "b3af7b39-6d0d-4da3-9b8f-7dfd3f5779be",
|
||||
"id": "30",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -404,7 +404,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "caf1f4ae-4b37-4954-800e-7120f0419de5",
|
||||
"id": "31",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -423,7 +423,7 @@
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "9e98c150-5fab-4251-bce7-0d8fc7b385b9",
|
||||
"id": "32",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Then using the same output function you can add the `ITM=True` arguments to output the new image order. You can also add the `image_gradcam_with_itm` argument to output the heat maps of the calculated images. "
|
||||
@ -432,7 +432,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "6a829b99-5230-463a-8b11-30ffbb67fc3a",
|
||||
"id": "33",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -446,7 +446,7 @@
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "d86ab96b-1907-4b7f-a78e-3983b516d781",
|
||||
"id": "34",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -457,7 +457,7 @@
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "4bdbc4d4-695d-4751-ab7c-d2d98e2917d7",
|
||||
"id": "35",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -468,7 +468,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "6c6ddd83-bc87-48f2-a8d6-1bd3f4201ff7",
|
||||
"id": "36",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -480,7 +480,7 @@
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "ea2675d5-604c-45e7-86d2-080b1f4559a0",
|
||||
"id": "37",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -491,7 +491,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e78646d6-80be-4d3e-8123-3360957bcaa8",
|
||||
"id": "38",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -503,7 +503,7 @@
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "05546d99-afab-4565-8f30-f14e1426abcf",
|
||||
"id": "39",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Write the csv file:"
|
||||
@ -512,7 +512,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "185f7dde-20dc-44d8-9ab0-de41f9b5734d",
|
||||
"id": "40",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -524,7 +524,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "b6a79201-7c17-496c-a6a1-b8ecfd3dd1e8",
|
||||
"id": "41",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []