# 5.2 Image Caption {.unnumbered}
This is a minimal example script showing how to caption a set
of images stored on the same machine where we are running the
models. To start, we will load a few modules that will be
needed for the task.
```{python}
from os import listdir
from os.path import splitext, join
from transformers import pipeline
from PIL import Image
import torch
import polars as pl
```
Next, we load the model that we are interested in using.
There are a large number of image captioning models
on HuggingFace; most can be used in exactly the same way by simply
changing the name of the model in the function call below.
```{python}
#| output: false
#| warning: false
model = pipeline(
    task="image-to-text",
    model="nlpconnect/vit-gpt2-image-captioning"
)
```
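For example, swapping in a different captioning model, such as BLIP, should require changing only the model name. This is a sketch; the exact model identifier is an assumption about what is hosted on HuggingFace.

```{python}
#| eval: false
# sketch: a different image-to-text model; the identifier below is an
# assumption about what is currently available on HuggingFace
model = pipeline(
    task="image-to-text",
    model="Salesforce/blip-image-captioning-base"
)
```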
With the model loaded, the next step is to build a list of
paths to the images that we are interested in using. Here,
we will collect all of the image files in the
directory containing the FSA-OWI images. Then, to save time,
we take just the first five images to work with. We could
run over all of the images or use a different collection by
changing the variables below.
```{python}
collection = 'fsaowi'
num_images = 5
paths = sorted(listdir(join('img', collection)))
paths = [x for x in paths if splitext(x)[1] == '.jpg']
paths = paths[:num_images]
```
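As an aside, the same list could be built with the **pathlib** module from the standard library; a minimal sketch, assuming the same `img/fsaowi` directory layout:

```{python}
#| eval: false
from pathlib import Path

# sketch: equivalent path construction with pathlib; sorted for a stable order
paths = sorted(p.name for p in Path('img', collection).glob('*.jpg'))
paths = paths[:num_images]
```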
Now, we will load each image and run the model over it. For
each image, we save its path along with the generated caption.
```{python}
output = {'path': [], 'caption': []}
for path in paths:
    image = Image.open(join('img', collection, path))
    image = image.convert('RGB')
    outputs = model(image)
    output['path'] += [path]
    output['caption'] += [outputs[0]['generated_text']]
```
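As an aside, HuggingFace pipelines generally accept a list of inputs as well, which avoids the explicit loop; a minimal sketch, assuming this pipeline supports the list form:

```{python}
#| eval: false
# sketch: run all of the images through the pipeline in a single call;
# the pipeline returns one list of candidate captions per image
images = [Image.open(join('img', collection, p)).convert('RGB') for p in paths]
captions = [res[0]['generated_text'] for res in model(images)]
output = {'path': paths, 'caption': captions}
```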
The output is constructed so that we can pass it to the `from_dict`
function from **polars** to construct a data frame. If needed, this
can be saved as a CSV file with the `write_csv` method of the
resulting data frame, as shown below.
```{python}
dt = pl.from_dict(output)
dt
```
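For example, writing the table to disk (the file name below is just an illustration):

```{python}
#| eval: false
# save the captions as a CSV file; the file name is arbitrary
dt.write_csv('captions.csv')
```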
Here, we will display the images along with their captions to
visualize the output of the model.
```{python}
for path in paths:
    image = Image.open(join('img', collection, path))
    image = image.convert('RGB')
    res = dt.filter(pl.col("path") == path)
    print("'{0:s}':".format(res['caption'][0]))
    display(image)
```