About


My name is Yoichi Ishibashi (石橋陽一 in Japanese). I am a researcher at Kyoto University. My research is focused on Deep Learning and Natural Language Processing.

Twitter, Google Scholar, LinkedIn, GitHub and Email

Bio

Yoichi Ishibashi is a researcher at Kyoto University. He received a bachelor's degree from Kyoto Sangyo University, Kyoto, Japan, in 2018, and master's and Ph.D. degrees from the Nara Institute of Science and Technology (NAIST) in 2020 and 2023, respectively. His research interests include natural language processing (NLP), representation learning, foundation models, and prompt learning.


News



Selected Projects

Evaluating the Robustness of Discrete Prompts


Discrete prompts have been used for fine-tuning pre-trained language models on diverse NLP tasks. In particular, automatic methods that generate discrete prompts from a small set of training instances have reported superior performance. However, a closer look at the learnt prompts reveals that they contain noisy and counter-intuitive lexical constructs that would not be encountered in manually-written prompts. This raises an important yet understudied question regarding the robustness of automatically learnt discrete prompts when used in downstream tasks. To address this question, we conduct a systematic study of the robustness of discrete prompts by applying carefully designed perturbations to prompts learnt with AutoPrompt and to their inputs, and then measuring performance on two Natural Language Inference (NLI) datasets. Our experimental results show that although discrete prompt-based methods remain relatively robust against perturbations to NLI inputs, they are highly sensitive to other types of perturbations such as shuffling and deletion of prompt tokens. Moreover, they generalize poorly across different NLI datasets. We hope our findings will inspire future work on robust discrete prompt learning.

Accepted to EACL 2023
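
For illustration, here is a minimal sketch of the kind of prompt-token perturbations studied (shuffling and random deletion). The example prompt and function names are assumptions made for this sketch, not the paper's code.

```python
import random

def shuffle_prompt_tokens(prompt_tokens, seed=0):
    """Return a copy of the prompt with its tokens randomly reordered."""
    rng = random.Random(seed)
    perturbed = list(prompt_tokens)
    rng.shuffle(perturbed)
    return perturbed

def delete_prompt_tokens(prompt_tokens, drop_rate=0.3, seed=0):
    """Return a copy of the prompt with a fraction of its tokens removed."""
    rng = random.Random(seed)
    return [t for t in prompt_tokens if rng.random() >= drop_rate]

# Hypothetical AutoPrompt-style discrete prompt for an NLI input.
prompt = ["{premise}", "atmosphere", "alot", "dialogue", "{hypothesis}", "[MASK]"]
print(shuffle_prompt_tokens(prompt))
print(delete_prompt_tokens(prompt))
```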

Subspace-based Set Operations on a Pre-trained Word Embedding Space


Word embedding is a fundamental technology in natural language processing. It is often exploited for tasks involving sets of words, although standard methods for representing word sets and set operations remain limited. If we can leverage the advantages of word embeddings for such set operations, we can calculate sentence similarity and find words that effectively share a concept with a given word set in a straightforward way. In this study, we formulate representations of sets and set operations in a pre-trained word embedding space. Inspired by quantum logic, we propose a novel formulation of set operations using subspaces of a pre-trained word embedding space. Based on these definitions, we propose two metrics: the degree to which a word belongs to a set, and the similarity between two sets. Our experiments on the Text Concept Set Retrieval and Semantic Textual Similarity tasks demonstrate the effectiveness of the proposed method.
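
As a rough illustration, the sketch below represents a word set as the subspace spanned by its embeddings and scores membership by the norm of a word's projection onto that subspace. The random vectors and function names are stand-ins, not the paper's implementation.

```python
import numpy as np

def subspace_basis(word_vectors):
    """Orthonormal basis of the subspace spanned by a set of word embeddings."""
    A = np.stack(word_vectors, axis=1)   # shape: (dim, n_words)
    Q, _ = np.linalg.qr(A)               # columns of Q span the same subspace
    return Q

def membership(vector, basis):
    """Degree to which a word belongs to the set: norm of its projection onto the subspace."""
    projection = basis @ (basis.T @ vector)
    return np.linalg.norm(projection) / np.linalg.norm(vector)

# Toy example with random vectors standing in for pre-trained embeddings.
rng = np.random.default_rng(0)
word_set = [rng.normal(size=300) for _ in range(5)]
basis = subspace_basis(word_set)
query = rng.normal(size=300)
print(membership(query, basis))  # value in [0, 1]
```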


Reflection-based Word Attribute Transfer


In natural language processing, words are represented as vectors. PMI-based word embeddings such as word2vec capture semantic operations on word vectors, known as analogies, such as king - man + woman ≈ queen. These relations can be used to change a word's gender attribute, for example from king to queen. This attribute transfer can be performed by subtracting the difference vector man - woman from king when we have explicit knowledge of the gender of the given word king. However, such knowledge cannot be prepared for many words and attributes in practice. To transfer queen into king in this analogy-based manner, we need to know that queen denotes a female and add the difference vector to change its gender attribute. In this work, we transfer such binary attributes based on the assumption that the transfer mapping becomes the identity mapping when applied twice. The proposed framework is based on reflection mapping, which satisfies this property: queen should be transferred back to king with the same mapping as the transfer from king to queen. Experimental results show that the proposed method can transfer the attributes of given words and does not change words that do not have the target attributes.

Accepted to ACL 2020 Student Research Workshop
Awards: Incentive Award (YANS 2019), Excellent Research Award (SIG-NL), Excellent Award (NLP 2020)
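
As an illustration of the involution property behind this idea, the sketch below reflects a vector across a hyperplane and checks that applying the same mapping twice restores the input. The vectors and parameters are random stand-ins, not the learned mirrors from the paper.

```python
import numpy as np

def reflect(x, normal, point):
    """Reflect vector x across the hyperplane through `point` with normal `normal`."""
    a = normal / np.linalg.norm(normal)
    return x - 2.0 * np.dot(x - point, a) * a

rng = np.random.default_rng(0)
king = rng.normal(size=300)     # stand-in for a pre-trained embedding
normal = rng.normal(size=300)   # mirror parameters (learned in the actual method)
point = rng.normal(size=300)

queen_like = reflect(king, normal, point)
restored = reflect(queen_like, normal, point)
print(np.allclose(restored, king))  # True: reflecting twice is the identity mapping
```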

Generating Responses based on Information Visually-Induced by Text Utterance


In research on Neural Machine Translation, several studies generate translated sentences using both images and sentences, and show that visual information improves translation performance. However, sentence generation methods that rely on images cannot be applied to many dialogue systems, since text-based dialogue systems accept only text input. In this paper, we propose the Associative Conversation Model, which generates image feature vectors from textual feature vectors and uses them to generate responses, so that visual information can be exploited in a dialogue system without image input. A comparative experiment between our proposed model and a model without this association shows that our method improves context-dependence and informativeness scores by associating visual information related to the input sentences.
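
A minimal sketch of the association step, assuming a small feed-forward network that maps a text feature vector into an image-feature space; the dimensions and names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TextToImageAssociator(nn.Module):
    """Maps a text feature vector to an associated image feature vector."""
    def __init__(self, text_dim=512, image_dim=2048, hidden_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, image_dim),
        )

    def forward(self, text_features):
        return self.net(text_features)

# The associated image features can be concatenated with the text features and
# fed to the response decoder, so no image input is needed at inference time.
associator = TextToImageAssociator()
text_features = torch.randn(1, 512)
print(associator(text_features).shape)  # torch.Size([1, 2048])
```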


Artificial Tactile Sense

We recognize objects through our brain's somatosensory system from the pressure information produced when we touch them with our hands. Based on this idea, I developed an artificial tactile sense by combining pressure sensors with a convolutional neural network (CNN). This artificial tactile sense makes it possible to recognize an object held in a glove. The artificial-skin glove has 36 (6×6) pressure sensors in its palm, and the CNN classifies objects from the pressure distribution measured when an object is grasped with the glove.

Awards: 20th Anniversary Special Award (Techno-Ai 2016), Incentive Award (Techno-Ai 2016)
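
A minimal sketch of the kind of CNN classifier described, taking the 6×6 pressure map as input; the layer sizes and number of classes are assumptions for this sketch.

```python
import torch
import torch.nn as nn

class PressureGridCNN(nn.Module):
    """Classifies a grasped object from a 6x6 pressure map."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # (1, 6, 6) -> (16, 6, 6)
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> (32, 6, 6)
            nn.ReLU(),
        )
        self.classifier = nn.Linear(32 * 6 * 6, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = PressureGridCNN()
pressure_map = torch.randn(1, 1, 6, 6)  # one reading from the 36-sensor glove
print(model(pressure_map).shape)        # torch.Size([1, 10])
```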


Publications

Journal Articles

Reflection-based Word Attribute Transfer
Yoichi Ishibashi, Katsuhito Sudoh, Koichiro Yoshino, Satoshi Nakamura
Journal of Natural Language Processing 28 (1) March 2021


International Conferences/Workshops

Evaluating the Robustness of Discrete Prompts
Yoichi Ishibashi, Danushka Bollegala, Katsuhito Sudoh, Satoshi Nakamura
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023), Croatia, May 2023.

Multilingual Machine Translation Evaluation Metrics Fine-tuned on Pseudo-Negative Examples for WMT 2021 Metrics Task
Kosuke Takahashi, Yoichi Ishibashi, Katsuhito Sudoh, Satoshi Nakamura
Proceedings of the 6th Conference on Machine Translation (WMT 2021)

Reflection-based Word Attribute Transfer
Yoichi Ishibashi, Katsuhito Sudoh, Koichiro Yoshino, Satoshi Nakamura
Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2020), Student Research Workshop, Seattle, USA, July 2020.

KSU Team’s Dialogue System at the NTCIR-13 Short Text Conversation Task 2
Yoichi Ishibashi, Sho Sugimoto, Hisashi Miyamori
NTCIR-13


Preprint/Technical Reports

Subspace-based Set Operations on a Pre-trained Word Embedding Space
Yoichi Ishibashi, Sho Yokoi, Katsuhito Sudoh, Satoshi Nakamura
Nov. 2022

Generating Responses based on Information Visually-Induced by Text Utterance
Yoichi Ishibashi, Hisashi Miyamori
2018


Domestic Conferences/Workshops

An Extension of BERTScore Based on Canonical Angles and Subspaces
Yoichi Ishibashi, Sho Yokoi, Katsuhito Sudoh, Satoshi Nakamura
The 29th Annual Meeting of the Association for Natural Language Processing (NLP 2023), Okinawa

Set Operations Using Linear Subspaces of a Pre-trained Embedding Space
Yoichi Ishibashi, Sho Yokoi, Katsuhito Sudoh, Satoshi Nakamura
The 36th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI 2022), Kyoto

Set Operations on a Pre-trained Word Embedding Space Based on Linear Subspaces
Yoichi Ishibashi, Sho Yokoi, Katsuhito Sudoh, Satoshi Nakamura
The 28th Annual Meeting of the Association for Natural Language Processing (NLP 2022), Online, Young Researcher Encouragement Award

Automatic Machine Translation Evaluation Using Pseudo-Negative Examples Created by Word Attribute Transfer
Kosuke Takahashi, Yoichi Ishibashi, Katsuhito Sudoh, Satoshi Nakamura
The 28th Annual Meeting of the Association for Natural Language Processing (NLP 2022), Online

Set Operations in a Pre-trained Embedding Space
Yoichi Ishibashi, Sho Yokoi, Katsuhito Sudoh, Satoshi Nakamura
The 24th Workshop on Information-Based Induction Sciences (IBIS 2021)

Logical Operations Between Sets of Word Embeddings Based on Quantum Logic
Yoichi Ishibashi, Sho Yokoi, Katsuhito Sudoh, Satoshi Nakamura
The 16th NLP Young Researchers' Symposium (YANS 2021)

Augmenting Natural Language Inference Data via Word Attribute Transfer
Yoichi Ishibashi, Katsuhito Sudoh, Satoshi Nakamura
The 27th Annual Meeting of the Association for Natural Language Processing (NLP 2021)

Word Attribute Transfer in an Embedding Space Based on Reflection Mapping
Yoichi Ishibashi, Katsuhito Sudoh, Koichiro Yoshino, Satoshi Nakamura
SIG-NL (Excellent Research Award),
YANS 2019 (Incentive Award),
The 26th Annual Meeting of the Association for Natural Language Processing (NLP 2020) (Excellent Award)


A Question Answering Model Using Image Features Associated from Question Text
Yoichi Ishibashi, 森 泰, 木村 輔, Hisashi Miyamori
The 25th Annual Meeting of the Association for Natural Language Processing (NLP 2019)

Dialogue Response Generation with New Personalities via Operations on Persona Vectors
Sho Sugimoto, Yoichi Ishibashi, Hisashi Miyamori
The 10th Forum on Data Engineering and Information Management (DEIM 2018)

Associative Conversation Model: Response Generation Using Visual Information Associated from Utterance Text
Yoichi Ishibashi, Hisashi Miyamori
The 10th Forum on Data Engineering and Information Management (DEIM 2018)


Awards


Grants

