Commit

Update index.html
threefruits committed May 24, 2024
1 parent 661d546 commit 565d0a6
Showing 1 changed file with 40 additions and 4 deletions.
44 changes: 40 additions & 4 deletions index.html
@@ -91,7 +91,6 @@ <h2>Robotics Researcher, CS PhD Student at NUS</h2>
</div>
<div class="col-lg-9 pt-4 pt-lg-0 content" data-aos="fade-left">
<h3>Anxing Xiao (肖岸星), Robotics Researcher &amp; Engineer</h3>

<p>
I am a Ph.D. student in Computer Science at the National University of Singapore, advised by <a href='https://www.comp.nus.edu.sg/~dyhsu/'>Prof. David Hsu</a>. My research focuses primarily on developing compositional reasoning algorithms and multimodal interaction systems that enable intelligent robots to adaptively perform assistive tasks in dynamic, open, human-centered environments.
<!-- My research aims to develop autonomous robots that can reason and interact effectively with unstructured environments, especially human-centred environments. -->
@@ -208,6 +207,40 @@ <h2>Publications</h2>
<div class="row">

<!--<p>&nbsp;</p>-->



<div class="col-lg-12 pt-2 pt-lg-3 content" >

<!-- <h3><a href="https://scholar.google.com/citations?user=qrgIuiEAAAAJ&hl=en" target="_blank"> Google Scholar</a> </h3>
<br> -->

<h3>2024</h3>

<div class="row">
<div class="col-lg-12 item">

<p>
<strong>Octopi: Object Property Reasoning with Large Tactile-Language Models </strong>
<br>
<font color="#4e4e4e" size="4">Samson Yu, Kelvin Lin, <strong>Anxing Xiao</strong>, Jiafei Duan, and Harold Soh</font>
<br>
<em>Accepted to Robotics: Science and Systems (RSS) 2024.
</em>
<br>
<a href="https://arxiv.org/abs/2405.02794">arXiv</a>
/
<a href="https://octopi-tactile-lvlm.github.io/">Website</a>
/
<a href="https://github.com/clear-nus/octopi">Code</a>
</p>

</div>
</div>

</div>


<div class="col-lg-12 pt-2 pt-lg-3 content" >

<!-- <h3><a href="https://scholar.google.com/citations?user=qrgIuiEAAAAJ&hl=en" target="_blank"> Google Scholar</a> </h3>
@@ -223,7 +256,7 @@ <h3>2023</h3>
<br>
<font color="#4e4e4e" size="4">Bingyi Xia, Hao Luan, Ziqi Zhao, Xuheng Gao, Peijia Xie, <strong>Anxing Xiao</strong>, Jiankun Wang, Max Q-H Meng</font>
<br>
<em>Accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2023.
<em>IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2023.
</em>
<br>
<a href="https://arxiv.org/abs/2303.06624">arXiv</a>
@@ -355,13 +388,12 @@ <h3>2020</h3>
<br>
<font color="#4e4e4e" size="4">Yaqi Wu*, <strong>Anxing Xiao</strong>*, Haoyao Chen, Shiwu Zhang and Yunhui Liu</font>
<br>
<em>IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Boston, MA, USA, July 2020.
<em>IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) 2020.
</em>
<br>
<a href="https://arxiv.org/abs/2107.00773">arXiv</a>
/
<a href="https://www.youtube.com/watch?v=5pzJ8U7YyGc">Video</a>

</p>

</div>
@@ -462,6 +494,8 @@ <h4>Robi Butler: Multimodal Remote Interaction with Household Robotic Assistants
The integration of the above components allows Robi Butler to ground remote multimodal instructions in a real-world home environment in a zero-shot manner.
<br>
<a class="btn btn-outline-primary my-1 mr-1 btn-sm" href="assets/doc/robi.pdf" target="_blank">Paper</a>
<a class="btn btn-outline-primary my-1 mr-1 btn-sm" href="https://youtu.be/mf4WLQFGa8c" target="_blank">Video</a>

</p>

</div>
@@ -485,6 +519,8 @@ <h4>Octopi: Object Property Reasoning with Large Tactile-Language Models </h4>
In this work, we investigate combining tactile perception with language, enabling embodied systems to obtain physical properties through interaction and to apply common-sense reasoning. We contribute a new dataset, PHYSICLEAR, which comprises both physical/property reasoning tasks and annotated tactile videos obtained using a GelSight tactile sensor. We then introduce OCTOPI, a system that leverages both tactile representation learning and large vision-language models to predict and reason about tactile inputs with minimal language fine-tuning. Our evaluations on PHYSICLEAR show that OCTOPI effectively uses intermediate physical property predictions to improve physical reasoning on both trained tasks and zero-shot reasoning.
<br>
<a class="btn btn-outline-primary my-1 mr-1 btn-sm" href="https://arxiv.org/pdf/2405.02794" target="_blank">Paper</a>
<a class="btn btn-outline-primary my-1 mr-1 btn-sm" href="https://octopi-tactile-lvlm.github.io/" target="_blank">Website</a>

</p>
</div>
</div>
