Yoshitaka USHIKU

J-GLOBAL         Last updated: Oct 19, 2018 at 17:46
 
Name
Yoshitaka USHIKU
E-mail
yoshitaka.ushiku@sinicx.com
URL
http://www.mi.t.u-tokyo.ac.jp/ushiku/
Affiliation
The University of Tokyo
Section
Graduate School of Information Science & Technology
Job title
Lecturer
Degree
Ph.D. (The University of Tokyo)
Twitter ID
losnuevetoros

Published Papers

 
Kohei Uehara, Antonio Tejero-de-Pablos, Yoshitaka Ushiku, Tatsuya Harada
CoRR   abs/1808.01821   2018   [Refereed]
Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, Tatsuya Harada
CoRR   abs/1712.02560   2017   [Refereed]
Yuji Tokozume, Yoshitaka Ushiku, Tatsuya Harada
CoRR   abs/1711.10284   2017   [Refereed]
Yuji Tokozume, Yoshitaka Ushiku, Tatsuya Harada
CoRR   abs/1711.10282   2017   [Refereed]
Katsunori Ohnishi, Shohei Yamamoto, Yoshitaka Ushiku, Tatsuya Harada
CoRR   abs/1711.09618   2017   [Refereed]

Misc

 
Kohei Uehara, Antonio Tejero-De-Pablos, Yoshitaka Ushiku, Tatsuya Harada
   Aug 2018
Traditional image recognition methods only consider objects belonging to
already learned classes. However, since training a recognition model with every
object class in the world is unfeasible, a way of getting information on
unknown objects (i.e....
Masatoshi Hidaka, Yuichiro Kikura, Yoshitaka Ushiku, Tatsuya Harada
Image Laboratory (画像ラボ)   29(6) 24-30   Jun 2018
Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, Tatsuya Harada
   Apr 2018
Numerous algorithms have been proposed for transferring knowledge from a
label-rich domain (source) to a label-scarce domain (target). Almost all of
them are proposed for a closed-set scenario, where the source and the target
domain completely sha...
Andrew Shin, Yoshitaka Ushiku, Tatsuya Harada
   Apr 2018
The image description task has invariably been examined in a static manner,
with qualitative presumptions held to be universally applicable regardless of
the scope or target of the description. In practice, however, different viewers may
pay attention...
Atsushi Kanehira, Luc Van Gool, Yoshitaka Ushiku, Tatsuya Harada
   Apr 2018
This paper introduces a novel variant of video summarization, namely building
a summary that depends on the particular aspect of a video the viewer focuses
on. We refer to this as $\textit{viewpoint}$. To infer what the desired
$\textit{viewpoint}...

Conference Activities & Talks

 
Takahide Murata, Akisato Kimura, Yoshitaka Ushiku, Takayoshi Yamashita, Yuji Yamauchi, Hironobu Fujiyoshi
IEICE Technical Report (信学技報)   24 Mar 2016
GUNJI Naoyuki, HIGUCHI Takayuki, YASUMOTO Koki, MURAOKA Hiroshi, USHIKU Yoshitaka, HARADA Tatsuya, KUNIYOSHI Yasuo
Technical report of IEICE. PRMU   14 Mar 2013   
In recent years, fine-grained classification has been studied intensively. However, the various local descriptors and encodings proposed for general object classification have rarely been applied to fine-grained classification. In this work, we evalu...
KANEZAKI Asako, INABA Sho, USHIKU Yoshitaka, YAMASHITA Yuya, MURAOKA Hiroshi, HARADA Tatsuya, KUNIYOSHI Yasuo
IEICE Technical Report (信学技報)   2 Sep 2012
We propose an efficient method to train multiple object detectors simultaneously using a large-scale image dataset. The one-vs-all approach that optimizes the boundary between positive samples from a target class and negative samples from the othe...