minor changes for revision 2 #212

Merged · 26 commits · Oct 8, 2024

Commits
9fc7792
fix typos
iulusoy Sep 26, 2024
3acaed1
add buttons for google colab everywhere
iulusoy Sep 26, 2024
0564cfc
update readme, separate out FAQ
iulusoy Oct 4, 2024
c1fe64d
add privacy disclosure statement
iulusoy Oct 4, 2024
4a1003e
do not install using uv
iulusoy Oct 4, 2024
76627e6
update docs notebook
iulusoy Oct 4, 2024
599bdc0
explicit install of libopenblas
iulusoy Oct 4, 2024
a9215dd
explicit install of libopenblas
iulusoy Oct 4, 2024
f22ed60
explicit install of libopenblas
iulusoy Oct 4, 2024
7fc233f
try to get scipy installed using uv
iulusoy Oct 4, 2024
c47acf7
use ubuntu 24.04
iulusoy Oct 4, 2024
2e81c66
go back to pip
iulusoy Oct 4, 2024
3056f90
try with scipy only
iulusoy Oct 7, 2024
6625e52
try with a few others
iulusoy Oct 7, 2024
dbfbf79
use hatchling
iulusoy Oct 7, 2024
19beaaf
Merge branch 'main' into revision-2
iulusoy Oct 7, 2024
d17d440
wording changes, install all requirements
iulusoy Oct 7, 2024
89e9200
fix offending spacy version
iulusoy Oct 7, 2024
3d81095
run all tests
iulusoy Oct 7, 2024
de7ba83
include faq in documentation, fix link
iulusoy Oct 7, 2024
d09a246
make readme links point to documentation
iulusoy Oct 8, 2024
e7fb44c
load model safely
iulusoy Oct 8, 2024
45a7a52
correct edit on GH link and bump version
iulusoy Oct 8, 2024
0f17ccd
Merge branch 'main' into revision-2
iulusoy Oct 8, 2024
5bb4a64
remove comments
iulusoy Oct 8, 2024
380c2d7
Merge branch 'revision-2' of https://github.com/ssciwr/AMMICO into re…
iulusoy Oct 8, 2024
4 changes: 3 additions & 1 deletion FAQ.md
@@ -98,7 +98,9 @@ Some features of ammico require internet access; a general answer to this questi…
Due to well-documented biases in the detection of minorities with computer vision tools, and to the ethical implications of such detection, these parts of the tool are not directly made available to users. To access these capabilities, users must first agree with an ethical disclosure statement that reads:

"DeepFace and RetinaFace provide wrappers to trained models in face recognition and emotion detection. Age, gender and race/ethnicity models were trained on the backbone of VGG-Face with transfer learning.
ETHICAL DISCLOSURE STATEMENT:

ETHICAL DISCLOSURE STATEMENT:

The Emotion Detector uses DeepFace and RetinaFace to probabilistically assess the gender, age and race of the detected faces. Such assessments may not reflect how the individuals identify. Additionally, the classification is carried out in simplistic categories and contains only the most basic classes (for example, “male” and “female” for gender, and seven non-overlapping categories for ethnicity). To access these probabilistic assessments, you must therefore agree with the following statement: “I understand the ethical and privacy implications such assessments have for the interpretation of the results and that this analysis may result in personal and possibly sensitive data, and I wish to proceed.”

This disclosure statement is included as a separate line of code early in the flow of the Emotion Detector. Once the user has agreed with the statement, further data analyses will also include these assessments.
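
The snippet below is a minimal sketch of how such a consent gate can be implemented; the function name `ask_for_disclosure`, the environment variable `ACCEPT_DISCLOSURE`, and the commented call at the end are illustrative assumptions, not ammico's actual API.

```python
import os

DISCLOSURE_TEXT = (
    "I understand the ethical and privacy implications such assessments have "
    "for the interpretation of the results and that this analysis may result "
    "in personal and possibly sensitive data, and I wish to proceed."
)


def ask_for_disclosure(env_var: str = "ACCEPT_DISCLOSURE") -> bool:
    """Return True only if the user has explicitly accepted the disclosure.

    The answer is cached in an environment variable so the prompt appears
    only once per session.
    """
    if env_var in os.environ:
        return os.environ[env_var] == "True"
    answer = input(f"{DISCLOSURE_TEXT}\nDo you agree? [yes/no] ")
    accepted = answer.strip().lower() in ("y", "yes")
    os.environ[env_var] = str(accepted)
    return accepted


# In a hypothetical emotion-detection step, the sensitive assessments
# (age, gender, race/ethnicity) would only be computed after consent:
# if ask_for_disclosure():
#     results.update(probabilistic_face_assessments(image))
```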
10 changes: 5 additions & 5 deletions README.md
@@ -39,22 +39,22 @@ The `AMMICO` package can be installed using pip:
```
pip install ammico
```
This will install the package and its dependencies locally. If after installation you get some errors when running some modules, please follow the instructions in the [FAQ](FAQ.md).
This will install the package and its dependencies locally. If after installation you get some errors when running some modules, please follow the instructions in the [FAQ](https://ssciwr.github.io/AMMICO/build/html/faq_link.html).

## Usage

The main demonstration notebook can be found in the `notebooks` folder and also on google colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)].
The main demonstration notebook can be found in the `notebooks` folder and also on google colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/DemoNotebook_ammico.ipynb).

There are further sample notebooks in the `notebooks` folder for the more experimental features:
1. Topic analysis: Use the notebook `get-text-from-image.ipynb` to analyse the topics of the extracted text.\
**You can run this notebook on google colab: [![Open In Colab](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/get-text-from-image.ipynb)**
**You can run this notebook on google colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/get-text-from-image.ipynb)**
Place the data files and google cloud vision API key in your google drive to access the data.
1. To crop social media posts use the `cropposts.ipynb` notebook.
**You can run this notebook on google colab: [![Open In Colab](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/cropposts.ipynb)**
**You can run this notebook on google colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ssciwr/ammico/blob/main/ammico/notebooks/cropposts.ipynb)**

## Features
### Text extraction
The text is extracted from the images using [google-cloud-vision](https://cloud.google.com/vision). For this, you need an API key. Set up your google account following the instructions on the google Vision AI website or as described [here](docs/source/set_up_credentials.md).
The text is extracted from the images using [google-cloud-vision](https://cloud.google.com/vision). For this, you need an API key. Set up your google account following the instructions on the google Vision AI website or as described [here](https://ssciwr.github.io/AMMICO/build/html/create_API_key_link.html).
You then need to export the location of the API key as an environment variable:
```
export GOOGLE_APPLICATION_CREDENTIALS="location of your .json"
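
If it is more convenient to set the credential path from within Python (for example in a notebook), the same environment variable can be exported before the text extraction is run; the path below is a placeholder for your own key file.

```python
import os

# Placeholder path - point this at your own Google Cloud service-account key.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/your-key.json"
```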
2 changes: 1 addition & 1 deletion ammico/multimodal_search.py
@@ -287,7 +287,7 @@ def load_tensors(self, name: str) -> torch.Tensor:
Returns:
features_image_stacked (torch.Tensor): tensors of images features.
"""
features_image_stacked = torch.load(name)
features_image_stacked = torch.load(name, weights_only=True)
return features_image_stacked

def extract_text_features(self, model, text_input: str) -> torch.Tensor:
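
The one-line change above passes `weights_only=True` to `torch.load`, so the stored file is deserialized with PyTorch's restricted unpickler, which only rebuilds tensors and primitive containers instead of executing arbitrary pickled objects. A minimal, self-contained illustration of the safer round trip (the file name is just an example):

```python
import torch

# Save a small feature tensor, then reload it with the restricted unpickler.
features = torch.rand(4, 512)
torch.save(features, "features.pt")

# weights_only=True refuses to execute arbitrary pickled code and only
# reconstructs tensors and basic container types from the file.
reloaded = torch.load("features.pt", weights_only=True)
assert torch.equal(features, reloaded)
```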
4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -15,7 +15,7 @@
project = "AMMICO"
copyright = "2022, Scientific Software Center, Heidelberg University"
author = "Scientific Software Center, Heidelberg University"
release = "0.0.1"
release = "0.2.2"

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
@@ -31,7 +31,7 @@
"github_user": "ssciwr", # Username
"github_repo": "AMMICO", # Repo name
"github_version": "main", # Version
"conf_py_path": "/source/", # Path in the checkout to the docs root
"conf_py_path": "/docs/source/", # Path in the checkout to the docs root
}

templates_path = ["_templates"]
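
For context: Sphinx themes that add an "Edit on GitHub" button typically build the link from `github_user`, `github_repo`, `github_version`, and `conf_py_path` plus the current page name, so `conf_py_path` must give the path from the repository root to the directory containing `conf.py`. Reconstructed from the diff above (an excerpt, not the full configuration), the corrected block reads roughly:

```python
# docs/source/conf.py (excerpt, reconstructed from the diff above)
html_context = {
    "github_user": "ssciwr",          # GitHub organisation
    "github_repo": "AMMICO",          # repository name
    "github_version": "main",         # branch the edit link points to
    "conf_py_path": "/docs/source/",  # path from the repo root to the docs sources
}
```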
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

[project]
name = "ammico"
version = "0.2.1"
version = "0.2.2"
description = "AI Media and Misinformation Content Analysis Tool"
readme = "README.md"
maintainers = [