Merge pull request #325 from Rishav-hub/patch-1
Grammar fix
johko authored Aug 12, 2024
2 parents d6c99cb + 69fe890 commit f366061
Showing 1 changed file with 1 addition and 2 deletions.
3 changes: 1 addition & 2 deletions chapters/en/unit4/multimodal-models/a_multimodal_world.mdx
@@ -48,8 +48,7 @@ A dataset consisting of multiple modalities is a multimodal dataset. Out of the
- Vision + Audio: [VGG-Sound Dataset](https://www.robots.ox.ac.uk/~vgg/data/vggsound/), [RAVDESS Dataset](https://zenodo.org/records/1188976), [Audio-Visual Identity Database (AVID)](https://www.avid.wiki/Main_Page).
- Vision + Audio + Text: [RECOLA Database](https://diuf.unifr.ch/main/diva/recola/), [IEMOCAP Dataset](https://sail.usc.edu/iemocap/).

- Now let us see what kind of tasks can be performed using a multimodal dataset? There are many examples, but we will focus generally on tasks that contains the visual and textual
- A multimodal dataset will require a model which is able to process data from multiple modalities, such a model is a multimodal model.
+ Now, let us see what kind of tasks can be performed using a multimodal dataset. There are many examples, but we will generally focus on tasks that contain both visual and textual elements. A multimodal dataset requires a model that is able to process data from multiple modalities. Such a model is called a multimodal model.

## Multimodal Tasks and Models

