
SHINOZAKI Takashi
Department of Informatics | Associate Professor
Last Updated: 2025/10/31
■Researcher basic information
Researcher number
10442972
J-Global ID
Profile
The aim of our research is to clarify how visual information reaches conscious perception in the brain.
We measure brain responses during the transient processes of binocular rivalry, an ambiguous perceptual phenomenon, using magnetoencephalography (MEG).
Research Keyword
- Neural Network, Artificial Intelligence, Deep Learning, Computational Neuroscience, Vision
Research Field
■Career
Career
- 2022/04 - Present  Kindai University, Faculty of Informatics, Associate Professor
- 2010/12 - 2022/03  National Institute of Information and Communications Technology, Center for Information and Neural Networks, Researcher
- 2010/08 - 2010/11  RIKEN BSI, Temporary Researcher
- 2009/04 - 2010/07  Center for Neural Science, New York University, Postdoctoral Fellow
- 2006/04 - 2009/03  RIKEN BSI, Special Postdoctoral Research Fellow
Educational Background
■Research activity information
Award
- 2022/07 Japanese Neural Network Society, Best Paper Award
Biologically motivated learning method for deep neural networks using hierarchical competitive learning (Awardee: Takashi Shinozaki)
- 2022/07 Japanese Neural Network Society, Excellent Research Award
Explaining coarse visual processing in the subcortical pathway with convolutional neural networks (Awardees: Chanseok LIM, Mikio INAGAKI, Takashi SHINOZAKI, Ichiro FUJITA)
- 2019/09 Japanese Neural Network Society, Best Research Award
Spatial frequency characteristics inside convolutional neural networks performing facial expression discrimination (Awardees: 小松優介; 稲垣未来男; 林燦碩; 篠崎隆志; 藤田一郎)
- 2017/09 Japanese Neural Network Society, Excellent Research Award
A deep neural network trained with local competitiveness and forward-propagating teaching signals (Awardee: 篠崎隆志)
Paper
- Chanseok Lim; Mikio Inagaki; Takashi Shinozaki; Ichiro Fujita, Scientific Reports, Springer Science and Business Media LLC, 13 (1), 2023/07 [Refereed]
Perception of facial expression is crucial for primate social interactions. This visual information is processed through the ventral cortical pathway and the subcortical pathway. However, the subcortical pathway exhibits inaccurate processing, and the responsible architectural and physiological properties remain unclear. To investigate this, we constructed and examined convolutional neural networks with three key properties of the subcortical pathway: a shallow layer architecture, concentric receptive fields at the initial processing stage, and a greater degree of spatial pooling. These neural networks achieved modest accuracy in classifying facial expressions. By replacing these properties, individually or in combination, with corresponding cortical features, performance gradually improved. Similar to amygdala neurons, some units in the final processing layer exhibited sensitivity to retina-based spatial frequencies (SFs), while others were sensitive to object-based SFs. Replacement of any of these properties affected the coordinates of the SF encoding. Therefore, all three properties limit the accuracy of facial expression information and are essential for determining the SF representation coordinate. These findings characterize the role of the subcortical computational processes in facial expression recognition. (A schematic sketch of such a shallow, heavily pooled network appears after this paper list.)
- Mikio Inagaki; Tatsuro Ito; Takashi Shinozaki; Ichiro Fujita, Frontiers in Psychology, Frontiers Media SA, 13, 2022/11 [Refereed]
Cultural similarities and differences in facial expressions have been a controversial issue in the field of facial communications. A key step in addressing the debate regarding the cultural dependency of emotional expression (and perception) is to characterize the visual features of specific facial expressions in individual cultures. Here we developed an image analysis framework for this purpose using convolutional neural networks (CNNs) that through training learned visual features critical for classification. We analyzed photographs of facial expressions derived from two databases, each developed in a different country (Sweden and Japan), in which corresponding emotion labels were available. While the CNNs reached high rates of correct results that were far above chance after training with each database, they showed many misclassifications when they analyzed faces from the database that was not used for training. These results suggest that facial features useful for classifying facial expressions differed between the databases. The selectivity of computational units in the CNNs to action units (AUs) of the face varied across the facial expressions. Importantly, the AU selectivity often differed drastically between the CNNs trained with the different databases. Similarity and dissimilarity of these tuning profiles partly explained the pattern of misclassifications, suggesting that the AUs are important for characterizing the facial features and differ between the two countries. The AU tuning profiles, especially those reduced by principal component analysis, are compact summaries useful for comparisons across different databases, and thus might advance our understanding of universality vs. specificity of facial expressions across cultures.
- Takashi Shinozaki, Neural Networks, Elsevier BV, 144, 271 - 278, 0893-6080, 2021/12 [Refereed]
- Ken-ichi Okada; Kenichiro Miura; Michiko Fujimoto; Kentaro Morita; Masatoshi Yoshida; Hidenaga Yamamori; Yuka Yasuda; Masao Iwase; Mikio Inagaki; Takashi Shinozaki; Ichiro Fujita; Ryota Hashimoto, Scientific Reports, Springer Science and Business Media LLC, 11 (1), 3237, 2021/02 [Refereed]
Schizophrenia affects various aspects of cognitive and behavioural functioning. Eye movement abnormalities are commonly observed in patients with schizophrenia (SZs). Here we examined whether such abnormalities reflect an anomaly in inhibition of return (IOR), the mechanism that inhibits orienting to previously fixated or attended locations. We analyzed spatiotemporal patterns of eye movement during free-viewing of visual images including natural scenes, geometrical patterns, and pseudorandom noise in SZs and healthy control participants (HCs). SZs made saccades to previously fixated locations more frequently than HCs. The time lapse from the preceding saccade was longer for return saccades than for forward saccades in both SZs and HCs, but the difference was smaller in SZs. SZs explored a smaller area than HCs. Generalized linear mixed-effect model analysis indicated that the frequent return saccades served to confine SZs’ visual exploration to localized regions. The higher probability of return saccades in SZs was related to cognitive decline after disease onset but not to the dose of prescribed antipsychotics. We conclude that SZs exhibited attenuated IOR under free-viewing conditions, which led to restricted scene scanning. IOR attenuation will be a useful clue for detecting impairment in attention/orienting control and accompanying cognitive decline in schizophrenia.
- Hirokazu Takahashi; Ali Emami; Takashi Shinozaki; Naoto Kunii; Takeshi Matsuo; Kensuke Kawai, Computers in Biology and Medicine, Elsevier BV, 125, 104016, 0010-4825, 2020/10 [Refereed]
OBJECTIVE: In long-term video-monitoring, automatic seizure detection holds great promise as a means to reduce the workload of the epileptologist. A convolutional neural network (CNN) designed to process images of EEG plots demonstrated high performance for seizure detection, but still has room for reducing the false-positive alarm rate. METHODS: We combined a CNN that processed images of EEG plots with patient-specific autoencoders (AE) of EEG signals to reduce the false alarms during seizure detection. The AE automatically logged abnormalities, i.e., both seizures and artifacts. Based on seizure logs compiled by expert epileptologists and errors made by AE, we constructed a CNN with 3 output classes: seizure, non-seizure-but-abnormal, and non-seizure. The accumulative measure of number of consecutive seizure labels was used to issue a seizure alarm. RESULTS: The second-by-second classification performance of AE-CNN was comparable to that of the original CNN. False-positive seizure labels in AE-CNN were more likely interleaved with "non-seizure-but-abnormal" labels than with true-positive seizure labels. Consequently, "non-seizure-but-abnormal" labels interrupted runs of false-positive seizure labels before triggering an alarm. The median false alarm rate with the AE-CNN was reduced to 0.034 h-1, which was one-fifth of that of the original CNN (0.17 h-1). CONCLUSIONS: A label of "non-seizure-but-abnormal" offers practical benefits for seizure detection. The modification of a CNN with an AE is worth considering because AEs can automatically assign "non-seizure-but-abnormal" labels in an unsupervised manner with no additional demands on the time of the epileptologist. (A sketch of this consecutive-label alarm rule appears after this paper list.)
- Ryohei Fukuma; Takufumi Yanagisawa; Manabu Kinoshita; Takashi Shinozaki; Hideyuki Arita; Atsushi Kawaguchi; Masamichi Takahashi; Yoshitaka Narita; Yuzo Terakawa; Naohiro Tsuyuguchi; Yoshiko Okita; Masahiro Nonaka; Shusuke Moriuchi; Masatoshi Takagaki; Yasunori Fujimoto; Junya Fukai; Shuichi Izumoto; Kenichi Ishibashi; Yoshikazu Nakajima; Tomoko Shofuda; Daisuke Kanematsu; Ema Yoshioka; Yoshinori Kodama; Masayuki Mano; Kanji Mori; Koichi Ichimura; Yonehiro Kanemura; Haruhiko Kishima, Scientific Reports, 9 (1), 20311, 2019/12 [Refereed]
Identification of genotypes is crucial for treatment of glioma. Here, we developed a method to predict tumor genotypes using a pretrained convolutional neural network (CNN) from magnetic resonance (MR) images and compared the accuracy to that of a diagnosis based on conventional radiomic features and patient age. Multisite preoperative MR images of 164 patients with grade II/III glioma were grouped by IDH and TERT promoter (pTERT) mutations as follows: (1) IDH wild type, (2) IDH and pTERT co-mutations, (3) IDH mutant and pTERT wild type. We applied a CNN (AlexNet) to four types of MR sequence and obtained the CNN texture features to classify the groups with a linear support vector machine. The classification was also performed using conventional radiomic features and/or patient age. Using all features, we succeeded in classifying patients with an accuracy of 63.1%, which was significantly higher than the accuracy obtained from using either the radiomic features or patient age alone. In particular, prediction of the pTERT mutation was significantly improved by the CNN texture features. In conclusion, the pretrained CNN texture features capture the information of IDH and TERT genotypes in grade II/III gliomas better than the conventional radiomic features. (A sketch of this feature-plus-SVM pipeline appears after this paper list.)
- Shinozaki T, NeurIPS Workshop on Shared Visual Representations in Human & Machine Intelligence (SVRHM), 2019/12 [Refereed]
- Ali Emami; Naoto Kunii; Takeshi Matsuo; Takashi Shinozaki; Kensuke Kawai; Hirokazu Takahashi, Computers in Biology and Medicine, 110, 227 - 233, 2019/07 [Refereed]
INTRODUCTION: Epileptologists could benefit from a diagnosis support system that automatically detects seizures because visual inspection of long-term electroencephalograms (EEGs) is extremely time-consuming. However, the diversity of seizures among patients makes it difficult to develop universal features that are applicable for automatic seizure detection in all cases, and the rarity of seizures results in a lack of sufficient training data for classifiers. METHODS: To overcome these problems, we utilized an autoencoder (AE), which is often used for anomaly detection in the field of machine learning, to perform seizure detection. We hypothesized that multichannel EEG signals are compressible by AE owing to their spatio-temporal coupling and that the AE should be able to detect seizures as anomalous events from an interictal EEG. RESULTS: Through experiments, we found that the AE error was able to classify seizure and nonseizure states with a sensitivity of 100% in 22 out of 24 available test subjects and that the AE was better than the commercially available software BESA and Persyst for half of the test subjects. CONCLUSIONS: These results suggest that the AE error is a feasible candidate for a universal seizure detection feature. (A sketch of this reconstruction-error approach appears after this paper list.)
- Emami, A.; Kunii, N.; Matsuo, T.; Shinozaki, T.; Kawai, K.; Takahashi, H., NeuroImage: Clinical, 22, 101684, 2213-1582, 2019/02 [Refereed]
- Spatial frequency characteristics inside convolutional neural networks performing facial expression discrimination, 小松優介; 稲垣未来男; 林燦碩; 篠崎隆志; 藤田一郎, IEICE Technical Report, 118 (367), 5 - 10, 2018/12
- A visual-experience-based training method for acquiring amygdala-neuron-like properties in CNNs, 林燦碩; 稲垣未来男; 小松優介; 篠崎隆志; 藤田一郎, IEICE Technical Report, 118 (322), 5 - 10, 2018/11
- Shinozaki T, NIPS Workshop on Deep Learning: Bridging Theory and Practice (DLTP), 2017 [Refereed]
- Kobe University, NICT and University of Siegen on the TRECVID 2017 AVS task, He Z; Shinozaki T; Shirahama K; Grzegorzek M; Uehara K, Proceedings of TREC Video Retrieval Evaluation (TRECVID), 2017
- Deep learning and basis extraction of visual features, 篠崎隆志, Journal of the Vision Society of Japan, 29 (3), 86 - 89, 2017 [Invited]
- Deep learning as an innovation in artificial intelligence, 篠崎隆志, Law and Computers (法とコンピュータ), 35, 23 - 28, 2017 [Invited]
- Efficient large-scale video retrieval by an ensemble of networks using curriculum learning, 松本泰幸; 篠崎隆志; 白浜公章; 上原邦昭, IPSJ SIG Technical Report, CVIM-206 (2), 2017
- Shinozaki T, NIPS Workshop on Representation Learning in Artificial and Biological Neural Networks (MLINI), 2016 [Refereed]
- Kobe University, NICT and University of Siegen on the TRECVID 2016 AVS task, Matsumoto Y; Shinozaki T; Shirahama K; Grzegorzek M; Uehara K, Proceedings of TREC Video Retrieval Evaluation (TRECVID), 2016
- Takashi Shinozaki, Neural Information Processing, ICONIP 2016, Part IV, 9950, 381 - 388, 0302-9743, 2016 [Refereed]
- Takashi Shinozaki; Yasushi Naruse; Hideyuki Câteau, Neural Networks, 46, 91 - 98, 0893-6080, 2013/10 [Refereed]
- Yoichi Miyawaki; Takashi Shinozaki; Masato Okada, Journal of Computational Neuroscience, 33 (2), 405 - 419, 0929-5313, 2012/10 [Refereed]
- Makoto Kaibara; Yoshihito Hayashi; Takashi Shinozaki; Isao Uchimura; Hiroshi Ujiie; Yoshiaki Suzuki, Journal of Biorheology, 24 (1), 36 - 41, 1867-0466, 2010/12 [Refereed]
- Takashi Shinozaki; Masato Okada; Alex D. Reyes; Hideyuki Cateau, Physical Review E, 81 (1), 011913, 1539-3755, 2010/01 [Refereed]
- Shinozaki, T.; Takeda, T., Electronics and Communications in Japan, 91 (4), 1942-9533, 2008
- Takashi Shinozaki; Hideyuki Cateau; Hidetoshi Urakubo; Masato Okada, Journal of the Physical Society of Japan, 76 (4), 044806, 0031-9015, 2007/04 [Refereed]
- Shinozaki T; Takeda T, IEEJ Trans. EIS, The Institute of Electrical Engineers of Japan, 127 (5), 679 - 685, 0385-4221, 2007 [Refereed]
Binocular rivalry is a phenomenon created by simultaneously presenting similar but different images to the two eyes. Many previous studies have investigated various brain responses to binocular rivalry, but the response associated with the perceptual transition has not yet been clarified. The present study aimed to measure the response of the perceptual transition in binocular rivalry using motion rivalry stimuli with various motion angles. The perception of motion rivalry stimuli is known to fall into two conditions depending on the angle between the two motion directions: a rivalrous condition that causes binocular rivalry and perceptual transitions, and a fused condition that does not. Visual evoked fields (VEFs) were recorded from five healthy subjects using a 440-channel whole-head magnetoencephalography (MEG) system. We classified trials into rivalrous or fused conditions and calculated time averages of root mean square (RMS) values for every 100 ms in each condition. The time averages of the RMS values in the rivalrous condition were significantly larger than those in the fused condition after 400 ms post-stimulus, suggesting that the perceptual transition in binocular rivalry increases the late MEG component. (A sketch of this windowed RMS analysis appears after this paper list.)
- Kaibara M; Shinozaki T; Kita R; Iwata H; Ujiie H; Sasaki K; Li JY; Sawasaki T; Ogawa H, Journal of Japanese Society of Biorheology, 20 (1), 35 - 43, 0913-4778, 2006 [Refereed]
We reported previously that human coagulation factor IX (F-IX), when activated by normal human red blood cells (RBCs), causes coagulation. We also identified and characterized the F-IX-activating enzyme in the normal RBC membrane. In the present study, the coagulation of blood in experimental animals, including swine, dogs, rabbits, cattle and sheep, was compared to that in humans, with special reference to the procoagulant activity of RBCs. Rheological measurement showed that coagulation of platelet-free plasma (PFP) in a polypropylene tube did not occur in any of the species. In swine, as in humans, coagulation of PFP supplemented with RBCs (RBCs/PFP) occurred. However, in dogs, rabbits, sheep, or cattle, coagulation of RBCs/PFP did not occur. Fluorescence assays of RBC membranes using a synthetic fluorogenic substrate suggested that F-IX-activating enzyme may be present in swine, dog and rabbit as well as human RBC membranes, but its level may be very low in sheep and bovine membranes. Our data suggest that there is a significant difference in procoagulant activity of RBCs among animal species. In addition, they suggest that appropriate selection of animal species would be important for studying venous thrombus formation, including the evaluation of anticoagulability of materials under stagnant flow conditions.
- T. Shinozaki; T. Takeda, Neurology and Clinical Neurophysiology, 2004, 108, 1526-8748, 2004 [Refereed]
- Atsuo Takahashi; Rio Kita; Takashi Shinozaki; Kenji Kubota; Makoto Kaibara, Colloid and Polymer Science, 281 (9), 832 - 838, 0303-402X, 2003/09 [Refereed]
- 篠崎 隆志; 岩田 宏紀; 喜多 理王; 貝原 真; 酒向 隆司; 飯塚 裕彦; 萩谷 昇; 李 俊佑; 澤崎 徹; 小川 博之, Abstracts of the Annual Meeting of the Japanese Society of Biorheology, 24th, 79, 2001/05
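The subcortical-pathway model in Lim et al. (2023) above is described as a shallow CNN with concentric first-stage receptive fields and strong spatial pooling. The following is a minimal sketch of that kind of architecture, not the authors' code; the layer sizes, the 7-class output, and the use of learned convolutions in place of fixed concentric filters are illustrative assumptions.

# Minimal sketch (not the published model) of a "subcortical-like" CNN:
# shallow, with a small first-stage filter bank and aggressive spatial pooling.
import torch
import torch.nn as nn

class SubcorticalLikeCNN(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        # Stage 1: filters standing in for concentric receptive fields.
        self.stage1 = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AvgPool2d(4),          # large spatial pooling
        )
        # Stage 2: a single additional convolutional stage (shallow overall).
        self.stage2 = nn.Sequential(
            nn.Conv2d(8, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one spatial location
        )
        self.readout = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.stage2(self.stage1(x))
        return self.readout(h.flatten(1))

if __name__ == "__main__":
    model = SubcorticalLikeCNN()
    logits = model(torch.randn(2, 1, 128, 128))  # two grayscale face images
    print(logits.shape)                          # torch.Size([2, 7])

Replacing the large pooling with smaller pooling and adding further stages would move this sketch toward the "cortical" variants the paper compares against.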
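Takahashi et al. (2020) above issue a seizure alarm from an accumulative count of consecutive "seizure" labels, so that interleaved "non-seizure-but-abnormal" labels break false-positive runs. Below is a minimal sketch of one plausible reading of that rule, with an assumed threshold of 10 consecutive labels; it is not the published implementation.

# Alarm rule sketch: alarm only after N consecutive second-by-second
# "seizure" labels; any other label resets the run.
from typing import Iterable, List

SEIZURE, ABNORMAL, NORMAL = "seizure", "non-seizure-but-abnormal", "non-seizure"

def alarm_times(labels: Iterable[str], n_consecutive: int = 10) -> List[int]:
    """Return the indices (seconds) at which an alarm would be issued."""
    alarms, run = [], 0
    for t, label in enumerate(labels):
        run = run + 1 if label == SEIZURE else 0  # non-seizure labels reset the run
        if run == n_consecutive:
            alarms.append(t)
            run = 0  # re-arm after issuing an alarm
    return alarms

# Example: a false-positive run interrupted by an "abnormal" label never alarms.
labels = [NORMAL] * 5 + [SEIZURE] * 6 + [ABNORMAL] + [SEIZURE] * 6 + [NORMAL] * 3
print(alarm_times(labels, n_consecutive=10))  # -> []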
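Fukuma et al. (2019) above classify glioma genotypes with a linear SVM applied to pretrained CNN (AlexNet) texture features plus patient age. The sketch below shows only the general shape of such a pipeline; the random feature arrays, feature dimensionality, and SVM settings are placeholders, not the published method or data.

# Pipeline sketch: CNN-derived features + age -> standardization -> linear SVM,
# evaluated with cross-validation. Synthetic arrays stand in for real features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_patients = 164
cnn_features = rng.normal(size=(n_patients, 256))   # placeholder CNN texture features
age = rng.uniform(20, 80, size=(n_patients, 1))     # patient age as an extra feature
X = np.hstack([cnn_features, age])
y = rng.integers(0, 3, size=n_patients)             # 3 genotype groups (IDH/pTERT)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())   # ~chance here, since labels are random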
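Emami et al. (2019) above train an autoencoder on interictal EEG and treat large reconstruction error as a marker of seizures. Here is a minimal sketch of that idea under stated assumptions: a small MLP trained to reproduce its input stands in for the real autoencoder, the data are synthetic, and the 99th-percentile threshold is arbitrary.

# Anomaly-detection sketch: fit a stand-in autoencoder on "normal" windows,
# then flag test windows whose reconstruction error exceeds a threshold.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_channels, win = 16, 50                                  # 16-channel EEG, 50-sample windows
interictal = rng.normal(0, 1, (500, n_channels * win))    # training (non-seizure) windows
test = rng.normal(0, 1, (20, n_channels * win))
test[5] += 4.0                                            # one artificially "anomalous" window

ae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
ae.fit(interictal, interictal)                            # learn to reconstruct normal EEG

def recon_error(model, X):
    return np.mean((model.predict(X) - X) ** 2, axis=1)

threshold = np.percentile(recon_error(ae, interictal), 99)
flags = recon_error(ae, test) > threshold
print("flagged windows:", np.flatnonzero(flags))          # the shifted window stands out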
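The MEG analysis in Shinozaki and Takeda (2007, IEEJ Trans. EIS) above averages root-mean-square (RMS) values across sensors within consecutive 100 ms windows before comparing the rivalrous and fused conditions. A minimal sketch of that windowing step with synthetic data; the sampling rate and epoch length are assumptions.

# RMS-per-window sketch for one epoch of multichannel MEG data.
import numpy as np

fs = 1000                                    # assumed sampling rate (Hz)
n_channels, n_samples = 440, 700             # 700 ms epoch, 440 MEG channels
epoch = np.random.default_rng(2).normal(size=(n_channels, n_samples))

rms = np.sqrt(np.mean(epoch ** 2, axis=0))   # RMS across channels at each time sample

win = int(0.1 * fs)                          # 100 ms window
n_win = n_samples // win
rms_per_window = rms[: n_win * win].reshape(n_win, win).mean(axis=1)
for i, value in enumerate(rms_per_window):
    print(f"{i * 100}-{(i + 1) * 100} ms: mean RMS = {value:.3f}")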
MISC
- 木下学; 福間良平; 柳澤琢史; 篠崎隆志; 貴島晴彦; 高橋雅道; 成田善孝; 有田英之; 藤本康倫; 寺川雄三; 露口尚弘; 深井順也; 沖田典子; 高垣匡寿; 石橋謙一; 児玉良典; 埜中正博; 森内秀祐; 泉本修一; 中島義和; 森鑑二; 正札智子; 市村幸一; 金村米博, Program and Abstracts of the Annual Meeting of the Japan Society for Neuro-Oncology, 35th, 83, 2017
- 篠崎 隆志, IEICE Technical Report, 116 (120), 229 - 234, 2016/07
- SHINOZAKI Takashi; YOKOTA Yusuke; NARUSE Yasushi, IEICE Technical Report (Neurocomputing), 114 (515), 211 - 216, 2015/03
- 篠崎 隆志; 成瀬 康, Proceedings of the Annual Conference of the Japanese Society for Artificial Intelligence, 28, 1 - 4, 2014
- Takashi Shinozaki; Yasushi Naruse, i-Perception, 5 (4), 380, 2014
- HAYAKAWA Tomoe; NARUSE Yasushi; MORITO Yusuke; SHINOZAKI Takashi; UMEHARA Hiroaki, IEICE Technical Report (HIP), 112 (283), 101 - 106, 2012/11
- SHINOZAKI Takashi; NARUSE Yasushi; MURATA Tsutomu; UMEHARA Hiroaki, IEICE Technical Report (Neurocomputing), 111 (315), 23 - 26, 2011/11
- Shinozaki T; Cateau H; Okada M, Meeting Abstracts of the Physical Society of Japan, 62 (2), 306, 2007/08
- Shinozaki T; Cateau H; Urakubo H; Okada M, Meeting Abstracts of the Physical Society of Japan, 62 (1), 305, 2007/02
- SHINOZAKI Takashi; CATEAU Hidenori; URAKUBO Hidetoshi; OKADA Masato, IEICE Technical Report, 106 (279), 1 - 6, 2006/10
- SHINOZAKI Takashi; TAKEDA Tunehiro, IEICE Technical Report (Neurocomputing), 104 (99), 61 - 66, 2004/05
- 村上玄; 篠崎隆志; 岩田宏紀; 喜多理王; 貝原真, Abstracts of the Annual Meeting of the Japanese Society of Biorheology, 23rd, 2000
Lectures, oral presentations, etc.
- Introduction to deep learning for crop images [Invited], Takashi Shinozaki, MAFF Next-Generation Greenhouse Horticulture Regional Deployment Promotion Project, Plant Factory Human Resource Development Program, 2020/11
- How the brain works and artificial intelligence [Invited], Takashi Shinozaki, Wakayama University, lecture series "Exploring Information and Communications Research around the World", 2020/11
- Next-generation AI technology inspired by the brain [Invited], Takashi Shinozaki, Osaka International Science Club, Friday Science Salon, 2020/02
- Deep learning and visual information processing in the brain [Invited], Takashi Shinozaki, Kyushu Institute of Technology, Seminar on Life Science and Systems Engineering, 2020/01
- Introduction to deep learning for crop images [Invited], Takashi Shinozaki, MAFF Next-Generation Greenhouse Horticulture Regional Deployment Promotion Project, Plant Factory Human Resource Development Program, 2019/11
- The brain, neural networks, and deep learning [Invited], Takashi Shinozaki, Kwansei Gakuin University, School of Science and Technology lecture, 2019/11
- AI technology as a tool in medicine [Invited], Takashi Shinozaki, The 53rd Annual Meeting of the Japan Epilepsy Society, 2019/11
- Biologically Inspired Representation Learning for Deep Neural Networks [Invited], Takashi Shinozaki, The 42nd Annual Meeting of the Japan Neuroscience Society, 2019/07
- Information processing mechanisms of the brain as seen in convolutional neural networks [Invited], Takashi Shinozaki, The University of Electro-Communications, Center for Neuroscience and Biomedical Engineering, research seminar, 2019/07
- Introduction to deep learning for crop images [Invited], Takashi Shinozaki, MAFF Next-Generation Greenhouse Horticulture Regional Deployment Promotion Project, Plant Factory Human Resource Development Program, 2018/12
- Deep learning as a tool for science [Invited], Takashi Shinozaki, Graduate School of Pharmaceutical Sciences, The University of Tokyo, Special Lecture on Medical Pharmacy, 2018/10
- Fundamentals of AI and its potential for agriculture [Invited], Takashi Shinozaki, MAFF Next-Generation Greenhouse Horticulture Regional Deployment Promotion Project, Plant Factory Human Resource Development Program, 2018/10
- Types of deep learning and selection criteria for their application [Invited], Takashi Shinozaki, Smart Agriculture Symposium, 2018/04
- Getting started with object detection using ChainerCV and OpenCV [Invited], Takashi Shinozaki, Japanese Neural Network Society, 2nd Workshop on Next-Generation Brain-Inspired Artificial Intelligence, 2018/03
- Image processing and learned representations with deep learning [Invited], Takashi Shinozaki, Graduate School of Pharmaceutical Sciences, The University of Tokyo, Endowed Chair of Human Cell-Based Drug Discovery, 2018/01
- Getting started with object detection using ChainerCV and OpenCV [Invited], Takashi Shinozaki, Japanese Neural Network Society, 1st Workshop on Next-Generation Brain-Inspired Artificial Intelligence, 2017/09
- Deep learning and basis extraction of visual features [Invited], Takashi Shinozaki, The Vision Society of Japan 2017 Winter Meeting, 2017/01
- Deep learning as an innovation in artificial intelligence [Invited], Takashi Shinozaki, The 41st General Meeting and Workshop of the Law and Computers Association of Japan, 2016/11
- Brain AI and Brain Science [Invited], Takashi Shinozaki, Workshop of the Research Group on Cognition and Behavior of Information, 2016/10
- Deep learning and the mechanisms of vision [Invited], Takashi Shinozaki, The 20th Visual Science Forum, 2016/08
- Data analysis and learned representations with deep learning [Invited], Takashi Shinozaki, The 58th JSAI SIG on Molecular Biology and Informatics (SIG-MBI), 2015/07
- New training methods for deep learning that learns like humans [Invited], Takashi Shinozaki, SICE Technical Committee on Intelligent Systems, 5th Workshop on the Frontiers of Intelligence, 2015/07
Affiliated academic society
Research Themes
- Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Challenging Research (Exploratory), Date (from-to): 2021/07 - 2024/03, Author: Ichiro Kuriki; Takashi Shinozaki
This project uses deep neural networks (DNNs), large-scale computational models of the human visual system, to test whether a DNN trained to perform human-like visual information processing also exhibits human-like visual illusions (percepts of shape, color, or motion that differ from the physical stimulus), thereby evaluating whether DNNs are appropriate computational models for studying human vision. In FY2021 we first built a system for DNN-based vision research: we procured a computer equipped with a GPU for fast iterative computation and set up a DNN environment on it. As a first step, an undergraduate thesis project examined how PredNet, a DNN for video processing whose design is based on vision research, processes an illusory image ("Rotating Snakes"). Using a deep learning program implemented in Chainer, the same framework as a previous study (Watanabe et al., 2018), we varied the training video sets and measured the resulting changes in learning efficiency in order to estimate which image features were being learned. The student completed the thesis, graduated, and entered graduate school. These findings are important for evaluating deep learning as a computational model of visual mechanisms, and the results are planned to be presented at a domestic conference in FY2022. We will continue to develop this line of work while advancing the main topic, DNN studies of the #TheDress image.
- Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (C), Date (from-to): 2020/04 - 2023/03, Author: Takashi Shinozaki
Aiming at information processing systems that are as energy-efficient as the biological brain, we simulated and analyzed the propagation of population firing in neural populations in order to clarify information gating, which we consider one source of the brain's efficiency. Using the Fokker-Planck formulation developed in previous years, we compared the commonly used linear leaky integrate-and-fire (LIF) model with the nonlinear exponential integrate-and-fire (EIF) model, which includes a Na-current term, in a synfire chain, a model of synchronized spike propagation through populations of neurons (the two membrane equations are summarized after this list). Under conditions with spontaneous firing, where the Na current is activated, the membrane-potential distribution of the EIF model broadened and the population became asynchronous. Because this asynchronous state depends strongly on the presence of spontaneous firing, it is easily abolished by the weak hyperpolarization produced by weak inhibitory input. This suggests that weak inhibitory input can control the synchrony of a neural population's membrane potentials and thereby gate the propagation of population firing. These results were presented at the 2021 annual meeting of the Society for Neuroscience. Further development of this work is expected to yield systems that, like the brain, process information with high energy efficiency while exploiting environmental noise, as well as new insights into the mechanisms of attention in the brain.
- Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Challenging Exploratory Research, Date (from-to): 2015/04 - 2018/03, Author: Kuriki Ichiro; Takashi Shinozaki
We investigated and established a method to measure the distribution of visual attention using visual evoked potentials (VEPs). When an image of a visual scene is divided into sectors and each sector flickers at its own rate, steady-state VEPs (SSVEPs) at the corresponding temporal frequencies are induced, and monitoring the changes in amplitude of each SSVEP lets us monitor the state of visual attention at each sector. In the present study, we sectored the left-eye and right-eye images in different ways; e.g., the left-eye image was sectored vertically and the right-eye image horizontally. If the numbers of vertical and horizontal sectors are 5 and 3, respectively, only 8 frequencies are used, yet the attentional state of 15 sectors can be obtained (a sketch of this read-out appears after this list). Owing to the reduced number of SSVEP frequencies, a better signal-to-noise ratio was obtained, and we succeeded in estimating the focus of attention with this method.
- Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Young Scientists (B), Date (from-to): 2012/04 - 2015/03, Author: SHINOZAKI Takashi
We studied brain responses under binocular rivalry to clarify the temporal dynamics of visual perception. Binocular rivalry is a visual phenomenon in which presenting two different images, one to each eye, causes temporally random perceptual alternation. Since recording brain responses usually requires several tens of repetitions, it is not well suited to such temporally random responses, so we developed a new method called 'phase template analysis'. The new method enabled one-shot recording of brain responses under binocular rivalry and clarified the temporal process of visual perception. The method was also applied to develop a brain-machine interface (BMI), resulting in a biped robot controlled by brain responses recorded with electroencephalography (EEG).
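For reference, the two membrane models compared in the Grant-in-Aid (C) project above (2020/04 - 2023/03) can be written in their standard textbook form; the symbols follow common conventions rather than the project's own notation. The LIF model is linear below threshold, while the EIF model adds an exponential term approximating the Na spike-initiation current:

\tau_m \frac{dV}{dt} = -\left(V - V_{\mathrm{rest}}\right) + R_m I(t) \qquad \text{(LIF)}

\tau_m \frac{dV}{dt} = -\left(V - V_{\mathrm{rest}}\right) + \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) + R_m I(t) \qquad \text{(EIF)}

In both models a spike is registered when V exceeds a cutoff and the potential is reset. The exponential term makes the EIF membrane much more sensitive to weak inputs near V_T, which is consistent with the sensitivity to weak inhibitory input described in the project summary above.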
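The 2015-2018 project above encodes a 5 x 3 grid of sectors with only 5 + 3 = 8 flicker frequencies by tagging left-eye columns and right-eye rows separately. Below is a minimal sketch of how an attention map could be read out from the 8 SSVEP amplitudes; the frequencies and the simple additive combination are illustrative assumptions, not the project's actual analysis.

# Binocular frequency-tagging sketch: attention to sector (i, j) should boost
# both the i-th column frequency (left eye) and the j-th row frequency (right eye).
import numpy as np

col_freqs = [6.0, 6.7, 7.5, 8.6, 10.0]   # Hz, left-eye vertical sectors (assumed)
row_freqs = [12.0, 13.3, 15.0]           # Hz, right-eye horizontal sectors (assumed)

# Suppose SSVEP analysis yielded one amplitude per tagged frequency,
# with attention on column 2 / row 1 boosting those two amplitudes.
col_amp = np.array([1.0, 1.1, 1.8, 1.0, 0.9])
row_amp = np.array([1.0, 1.7, 1.1])

# Combine the two 1-D profiles into a 5 x 3 attention map (one value per sector).
attention_map = col_amp[:, None] + row_amp[None, :]
i, j = np.unravel_index(np.argmax(attention_map), attention_map.shape)
print(f"estimated focus of attention: column {i}, row {j}")  # column 2, row 1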