diff --git a/FAQ.md b/FAQ.md
index caffea1c..b78b90b7 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -98,7 +98,9 @@ Some features of ammico require internet access; a general answer to this questi
 Due to well documented biases in the detection of minorities with computer vision tools, and to the ethical implications of such detection, these parts of the tool are not directly made available to users. To access these capabilities, users must first agree with a ethical disclosure statement that reads:
 "DeepFace and RetinaFace provide wrappers to trained models in face recognition and emotion detection. Age, gender and race/ethnicity models were trained on the backbone of VGG-Face with transfer learning.
-ETHICAL DISCLOSURE STATEMENT:
+
+ETHICAL DISCLOSURE STATEMENT:
+
 The Emotion Detector uses DeepFace and RetinaFace to probabilistically assess the gender, age and race of the detected faces. Such assessments may not reflect how the individuals identify. Additionally, the classification is carried out in simplistic categories and contains only the most basic classes (for example, “male” and “female” for gender, and seven non-overlapping categories for ethnicity).
 To access these probabilistic assessments, you must therefore agree with the following statement: “I understand the ethical and privacy implications such assessments have for the interpretation of the results and that this analysis may result in personal and possibly sensitive data, and I wish to proceed.”
 This disclosure statement is included as a separate line of code early in the flow of the Emotion Detector. Once the user has agreed with the statement, further data analyses will also include these assessments.
 
diff --git a/README.md b/README.md
index dda4f78f..787e5584 100644
--- a/README.md
+++ b/README.md
@@ -39,22 +39,22 @@ The `AMMICO` package can be installed using pip:
 ```
 pip install ammico
 ```
-This will install the package and its dependencies locally. If after installation you get some errors when running some modules, please follow the instructions in the [FAQ](FAQ.md).
+This will install the package and its dependencies locally. If after installation you get some errors when running some modules, please follow the instructions in the [FAQ](https://ssciwr.github.io/AMMICO/build/html/faq_link.html).
 
 ## Usage
-The main demonstration notebook can be found in the `notebooks` folder and also on google colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)].
+The main demonstration notebook can be found in the `notebooks` folder and also on google colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/DemoNotebook_ammico.ipynb).
 
 There are further sample notebooks in the `notebooks` folder for the more experimental features:
 1. Topic analysis: Use the notebook `get-text-from-image.ipynb` to analyse the topics of the extraced text.\
-**You can run this notebook on google colab: [![Open In Colab](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/get-text-from-image.ipynb)**
+**You can run this notebook on google colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/get-text-from-image.ipynb)**
 Place the data files and google cloud vision API key in your google drive to access the data.
 1. To crop social media posts use the `cropposts.ipynb` notebook.
-**You can run this notebook on google colab: [![Open In Colab](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/cropposts.ipynb)**
+**You can run this notebook on google colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/cropposts.ipynb)**
 
 ## Features
 
 ### Text extraction
-The text is extracted from the images using [google-cloud-vision](https://cloud.google.com/vision). For this, you need an API key. Set up your google account following the instructions on the google Vision AI website or as described [here](docs/source/set_up_credentials.md).
+The text is extracted from the images using [google-cloud-vision](https://cloud.google.com/vision). For this, you need an API key. Set up your google account following the instructions on the google Vision AI website or as described [here](https://ssciwr.github.io/AMMICO/build/html/create_API_key_link.html).
 You then need to export the location of the API key as an environment variable:
 ```
 export GOOGLE_APPLICATION_CREDENTIALS="location of your .json"
diff --git a/ammico/multimodal_search.py b/ammico/multimodal_search.py
index ece96c41..864a59cd 100644
--- a/ammico/multimodal_search.py
+++ b/ammico/multimodal_search.py
@@ -287,7 +287,7 @@ def load_tensors(self, name: str) -> torch.Tensor:
         Returns:
             features_image_stacked (torch.Tensor): tensors of images features.
         """
-        features_image_stacked = torch.load(name)
+        features_image_stacked = torch.load(name, weights_only=True)
         return features_image_stacked
 
     def extract_text_features(self, model, text_input: str) -> torch.Tensor:
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 047d47bb..c8d1d6bd 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -15,7 +15,7 @@
 project = "AMMICO"
 copyright = "2022, Scientific Software Center, Heidelberg University"
 author = "Scientific Software Center, Heidelberg University"
-release = "0.0.1"
+release = "0.2.2"
 
 # -- General configuration ---------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
@@ -31,7 +31,7 @@
     "github_user": "ssciwr",  # Username
     "github_repo": "AMMICO",  # Repo name
     "github_version": "main",  # Version
-    "conf_py_path": "/source/",  # Path in the checkout to the docs root
+    "conf_py_path": "/docs/source/",  # Path in the checkout to the docs root
 }
 
 templates_path = ["_templates"]
diff --git a/pyproject.toml b/pyproject.toml
index 717887c3..33644819 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
 
 [project]
 name = "ammico"
-version = "0.2.1"
+version = "0.2.2"
 description = "AI Media and Misinformation Content Analysis Tool"
 readme = "README.md"
 maintainers = [
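The `ammico/multimodal_search.py` hunk above replaces `torch.load(name)` with `torch.load(name, weights_only=True)`, which restricts unpickling to tensors and basic Python containers so a maliciously crafted checkpoint file cannot execute arbitrary code on load. A minimal sketch of the pattern, where `load_tensors_safe` and the file name are illustrative stand-ins rather than ammico's actual API:

```python
import os
import tempfile

import torch


def load_tensors_safe(name: str) -> torch.Tensor:
    # weights_only=True limits unpickling to tensors and primitive
    # containers, rejecting arbitrary objects a tampered file could carry.
    return torch.load(name, weights_only=True)


# Round-trip a tensor of image features through a temporary file.
features = torch.rand(3, 512)
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "features.pt")
    torch.save(features, path)
    restored = load_tensors_safe(path)

assert torch.equal(features, restored)
```

With `weights_only=True`, files that serialize anything beyond plain tensors and containers raise an error instead of being deserialized, which is why it is the safer default for feature caches like these.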