The human brain receives and processes information through sensory organs such as the eyes, nose, and mouth. A modality refers to the way in which something happens or is experienced, and a problem is characterized as multimodal when it involves several such modalities. Multimodal data therefore combines different types of information, each with its own distinct characteristics. Multimodal learning is a method for jointly learning features from heterogeneous data such as images, text, and sensor readings. Because it captures complementary information from different angles, it is essential for high-level, human-like data analysis. Our work focuses on integrating multimodal information at large scale.
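As a concrete illustration, the sketch below shows one common pattern for learning from two modalities at once: encode each modality separately, then fuse the representations for a joint prediction. This is a minimal PyTorch sketch; the feature dimensions, layer sizes, and concatenation-based fusion are illustrative assumptions, not a specific model from the publications listed below.

```python
# Minimal late-fusion sketch of multimodal learning in PyTorch.
# Dimensions and the fusion-by-concatenation strategy are
# illustrative assumptions, not a published model.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768,
                 hidden_dim=256, num_classes=10):
        super().__init__()
        # Modality-specific encoders project each input into a shared space.
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden_dim), nn.ReLU())
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        # The fused representation feeds a joint classifier head.
        self.classifier = nn.Linear(hidden_dim * 2, num_classes)

    def forward(self, image_feat, text_feat):
        z_img = self.image_encoder(image_feat)
        z_txt = self.text_encoder(text_feat)
        fused = torch.cat([z_img, z_txt], dim=-1)  # integrate both modalities
        return self.classifier(fused)

# Usage with dummy precomputed features (batch of 4 examples).
model = LateFusionClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 10])
```

Concatenation is only one fusion choice; attention-based or gated fusion can be substituted without changing the overall encode-then-fuse structure.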


meChat: In-Device Personal Assistant for Conversational Photo Sharing. IEEE Internet Computing, Volume 23 (2019) (IF: 4.231)


MoCA+: Incorporating User Modeling into Mobile Contextual Advertising. Middleware 2017 (demo)


meCurate: Personalized Curation Service using a Tiny Text Intelligence. WWW 2017 (demo)


Demo: Mobile Contextual Advertising Platform based on Tiny Text Intelligence. MobiSys 2017 (demo)

