Ubiquitous Robots 2022

Photo Gallery

Original image downloads:
  • 01_Opening Ceremony: 01_Opening Ceremony.zip
  • 02_Plenary Talks: 02_Plenary Talks.zip
  • 03_Keynote Speeches: 03_Keynote Speeches.zip
  • 04_Young Researcher Sessions: 04_Young Researcher Sessions.zip
  • 05_Best Paper Award Competition Session: 05_Best Paper Award Competition Session.zip
  • 06_Exhibition: 06_Exhibition.zip
  • 07_Banquet: 07_Banquet.zip

Hee-Sup Shin
Northwestern University
 
Bio-inspired soft strain sensing systems for measurement of wing deformation in small unmanned aerial vehicle
Abstract
Biological organisms demonstrate remarkable agility in complex environments, especially in comparison to engineered robotic systems. In part, this is due to an organism’s ability to detect disturbances and react to them quickly. In contrast, small unmanned aerial vehicles (UAVs) often lose flight stability in gusty environments. In this session, large-area soft strain sensing systems will be presented that are designed to tackle the challenge of quickly sensing these disturbances on small UAVs in flight.
Biography
Hee-Sup Shin received the B.S. degree in mechanical engineering from Korea University, Seoul, South Korea, in 2013, and the M.S. and Ph.D. degrees in mechanical engineering from Carnegie Mellon University, Pittsburgh, PA, USA, in 2015 and 2021, respectively. He is currently a postdoctoral research associate with the Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA. His research interests lie in soft sensors and actuators and their applications.
Christian Ott
Technische Universität Wien, Austria
 
Toward compliant robots that utilize their intrinsic body dynamics
Abstract
Inspired by the biological example of the human musculoskeletal system, a large family of elastic actuator concepts has been proposed in the robotics research community. Still, robots driven by elastic actuators have not reached the maturity level of state-of-the-art torque-controlled actuators, which are nowadays used in various commercial collaborative robots. While the general control properties of elastic robots are well understood, there is a gap in the realization of dynamic whole-body motions that utilize the full capabilities of the elastic body dynamics. In the robotics literature, whole-body motion generation and control has been dominated by the mastering of multi-body dynamics and either projection- or optimization-based methods. As a consequence, the resulting control actions cannot be applied easily to the elastic robot dynamics and do not take the intrinsic elastic robot dynamics into account. In this talk, I will give an overview of recent motion generation and control methods that aim at utilizing the natural dynamics and thus promise to allow a smoother application on elastic robots. A key concept in these methods is the use of reduced models, which represent a subset of the robot’s motion capabilities exactly. This is in contrast to the concept of template models, which represent an idealized behavior that can only be approximated by the real robot. Such reduced models have already been successfully applied to dynamic locomotion tasks. The discussed ideas will be exemplified by experimental results with several torque-controlled and elastic humanoid robots.

Biography
Christian Ott is currently Full Professor for Robotics at Technische Universität Wien in Vienna, Austria. He received his Dipl.-Ing. degree in Mechatronics from the University of Linz, Austria, in 2001 and the Dr.-Ing. degree from Saarland University, Germany, in 2005. From 2007 to 2009, he worked as a Project Assistant Professor at the Department of Mechano-Informatics, University of Tokyo, Japan. After that, he was a team leader at DLR and led a Helmholtz Young Investigators Group on “Dynamic Control of Legged Humanoid Robots” at DLR and the Technical University of Munich (TUM). From 2014 to 2022 he was head of the department for “Analysis and Control of Advanced Robotic Systems” in the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR). He has served as Associate Editor for the IEEE Transactions on Robotics and is currently Co-Editor-in-Chief of IFAC Mechatronics. He has been involved in several international conferences and was one of the General Chairs of Humanoids 2020. In 2018 he received an ERC Consolidator Grant on energy-efficient locomotion for elastic robots. His current research interests include nonlinear robot control, elastic robots, whole-body control, impedance control, and control of humanoid robots.

Hyung-Soon Park
KAIST, Korea
 
What would be the best use of robots for neuro-rehabilitation?
Abstract
Since the late 1990s, rehabilitation robots have been a great model of medical robotics because the intrinsic characteristics of rehabilitation tasks matched well with the robotics technology available at the time. Various exoskeleton- and end-effector-type rehabilitation robots have been developed for therapeutic and/or assistive tasks in upper-limb and/or lower-limb rehabilitation.

There is no doubt that robotic devices are effective in orthopedic rehabilitation, since they reduce the required human resources and enable more intensive and longer exercise. In neuro-rehabilitation, however, which aims at the recovery of musculoskeletal control after injury to the nervous system, recent studies tell us that rehabilitation robots are not necessarily superior to conventional physical therapy.

The key mechanism of neuro-rehabilitation may well be neuro-plasticity, which has been a central interest in neurology. This presentation will discuss what would be the best use of robotics technology for promoting neuro-plasticity and introduce recent research progress in the KAIST RENEW (Rehabilitation Engineering for Neurological disorders Worldwide) project.
Biography
Hyung-Soon Park received his B.S. (1994), M.S. (1996), and Ph.D. (2004) degrees in mechanical engineering, all from KAIST, Daejeon, Korea.
His specialty was telerobotic control within mechanical engineering, and since receiving his Ph.D. he has applied robotics and control technology to enhancing rehabilitation medicine.

He was a research scientist at the Rehabilitation Institute of Chicago from 2004 to 2009. From 2009 to 2013, he was a staff scientist with the Rehabilitation Medicine Department at the National Institutes of Health, Bethesda, MD. He joined KAIST in 2013 and is now a professor in the Mechanical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon, Korea. He directs research centers for future healthcare systems, including the KAIST global singularity project RENEW (Rehabilitation Engineering for Neurological disorders Worldwide) and the KAIST-CERAGEM center for future healthcare technology. His current research interest focuses mainly on optimizing robotics applications for promoting brain plasticity in neuro-rehabilitation.

Kensuke Harada
Osaka University, Japan
 
Robotic Manipulation Research Aiming for Industrial Applications
Abstract
In this talk, we present our recent progress on robotic manipulation research aiming for industrial applications.
We first explain why the automation of high-mix, low-volume production is difficult. Then, we introduce several aspects of research done mainly by our group, including task and motion planning, motion control, machine learning, and gripper design.

Biography
Prof. Kensuke Harada received his Ph.D. from the Graduate School of Mechanical Engineering, Kyoto University, in 1997. From 1997 to 2002, he worked as a Research Associate at the Department of Engineering, Hiroshima University. From 2002 to 2016, he worked at the Intelligent Systems Research Institute, National Institute of Advanced Industrial Science and Technology (AIST). Since 2016, he has been a professor at the Graduate School of Engineering Science, Osaka University. He is also the director of the Daifuku Logistics Automation Tech Collaborative Research Institute of Osaka University, a Cross-Appointment Fellow of the National Institute of Advanced Industrial Science and Technology (AIST), and CPO of Wing Robotics Corp. He is a Fellow of the JSME (Japan Society of Mechanical Engineers) and a Senior Member of the IEEE. His research interests include motion planning, machine learning, human motion analysis, and motion control of robotic hands/grippers in robotic manipulation research.

Jinah Jang
POSTECH, Korea
 
Bioprinted Human Tissues for Advanced Therapeutics
Abstract
Recent advances in biofabrication techniques have allowed for the fabrication of cardiac tissue models that are similar to the human heart in terms of their structure (e.g., volumetric scale and anatomy) and function (e.g., contractile and electrical properties). The importance of developing techniques for assessing the characteristics of 3D cardiac substitutes in real time without damaging their structures has also been emphasized. In particular, the heart has two primary mechanisms for transporting blood through the body: contractility and an electrical system based on intra- and extracellular calcium ion exchange. This talk will discuss how a 3D cardiovascular tissue testing platform could be generated by integrating the concept of bioprinting-assisted tissue engineering with electrical sensing platforms. Combined with recent advances in human pluripotent stem cell technologies, printed human tissues could serve as an enabling platform for studying complex physiology in the tissue and organ contexts of individuals.

Biography
Dr. Jinah Jang received her PhD at Pohang University of Science and Technology (POSTECH) in Korea and trained as a postdoctoral fellow at POSTECH and the Institute for Stem Cell and Regenerative Medicine at the University of Washington. She joined POSTECH in 2017 and is now an Associate Professor in Convergence IT Engineering, Mechanical Engineering, and the School of Interdisciplinary Bioscience and Bioengineering. She has published more than 95 peer-reviewed articles in prestigious journals in the area of bioprinting and tissue engineering. Her h-index is 40 with more than 6,858 citations (Google Scholar). She currently serves as an Associate Editor of Bio-Design and Manufacturing and on the board of directors of the International Society for Biofabrication. She has also received numerous awards, including the SME Sandra L. Bouckley Outstanding Young Engineer Award (2022) and an award from the Korea Tissue Engineering and Regenerative Medicine Society (2021). Her research interest lies in engineering functional human tissues using high-performance stem cells and printable-biomaterial-based 3D bioprinting technology.

Sang-Youn Kim
KoreaTech, Korea
 
Soft Haptic Actuators and Sensors for Human Robot Interaction
Abstract
The term ‘haptic’ relates to kinesthetic or tactile sensation. Kinesthetic and tactile information refers to sensory data obtained through receptors in joints, muscles, and ligaments, and through receptors in the skin, respectively. A user recognizes the stiffness of an object through kinesthetic information and discerns the texture of an object through tactile information. Therefore, a user can communicate and/or interact with a robot efficiently by adding haptic information to auditory and visual information. Robots and their interfaces are rapidly shifting from rigid to flexible and soft modules. Researchers are even developing shape-changing interfaces, which can provide better affordance to users. Since such interfaces and modules can have diverse shapes, the currently available rigid actuators/sensors are not well suited to providing/sensing tactile feedback, and we will need tactile actuators/sensors that have excellent shape conformity. Soft actuators/sensors are among the best candidates for that purpose. This talk addresses the best-established technologies for soft haptic actuators and sensors.

Biography
Sang-Youn Kim received the BS degree from Korea University, Korea, in 1994, and the MS and PhD degrees from the Korea Advanced Institute of Science and Technology (KAIST) in 1996 and 2004, respectively. From 2004 to 2005, he was a researcher in the Human Welfare Robot System Research Center. In 2005, he was a research staff member at the Samsung Advanced Institute of Technology. He is a professor of computer science and engineering at Korea University of Technology and Education and also a director of the Advanced Technology Research Center. His current research interests include human-robot interaction, virtual reality, and haptics. He received the Best Demo Award at the IEEE World Haptics Conference 2013. He has also received commendations from the Minister of Knowledge and Economy (2013), the Minister of Education (2018), and the Minister of Science and Technology (2022).

Min-hwan Oh
Seoul National University
 
Randomized Exploration in Structured Reinforcement Learning
Abstract
Recent years have witnessed increasing empirical successes in reinforcement learning (RL). Yet, we still have fundamental questions that are not well understood in RL. For example, how many observations are required for a decision-making RL agent to learn how to act optimally? How can the agent explore efficiently in feature space? Common approaches to exploration are highly inefficient. I will discuss how this can be addressed with a structured Markov decision process and randomized exploration, which enables efficient exploration and learning in feature space and allows RL to be data-efficient and practical.
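As a purely illustrative sketch of what randomized exploration in feature space can look like (a generic Thompson-sampling scheme for a linear bandit, not the specific algorithms of this talk), the snippet below maintains a Gaussian posterior over an unknown reward parameter and acts greedily with respect to a posterior sample; the feature generator, reward callback, and noise/prior scales are placeholder assumptions.

```python
import numpy as np

def linear_thompson_sampling(get_features, get_reward, T=1000, d=5,
                             noise_var=0.1, prior_var=1.0):
    """Toy Thompson sampling for a linear bandit with d-dimensional features.

    get_features: callable() -> (K, d) array of candidate-action features
    get_reward:   callable(x) -> noisy scalar reward for the chosen feature x
    """
    A = np.eye(d) / prior_var              # posterior precision of theta
    b = np.zeros(d)
    for _ in range(T):
        X = get_features()                 # current candidate actions
        Sigma = np.linalg.inv(A)           # posterior covariance
        mu = Sigma @ b                     # posterior mean
        theta_tilde = np.random.multivariate_normal(mu, Sigma)
        a = int(np.argmax(X @ theta_tilde))     # greedy w.r.t. the random sample
        r = get_reward(X[a])
        A += np.outer(X[a], X[a]) / noise_var   # Bayesian linear-regression update
        b += X[a] * r / noise_var
    return np.linalg.inv(A) @ b            # final posterior mean estimate
```

The exploration here comes entirely from acting on a posterior sample rather than the posterior mean, which is the sense in which randomization can make exploration in feature space data-efficient.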

Biography
Min-hwan Oh is an Assistant Professor in the Graduate School of Data Science at Seoul National University. He received a B.A. in Mathematics-Statistics with honors and received a Ph.D. in Operations Research with Data Science specialization at Columbia University, where he was advised by Prof. Garud Iyengar and co-advised by Prof. Assaf Zeevi. His doctoral thesis was recognized as a finalist for the George B. Dantzig Dissertation Award at INFORMS in 2020. His primary research interests are in sequential decision-making under uncertainty, reinforcement learning, bandit algorithms, statistical machine learning, and their applications.

Guy Williams
University of Tasmania, Australia
 
The new golden era of polar research and exploration with autonomous systems
Abstract
As climate change becomes humanity’s greatest existential crisis, the race is on to convince the world to take urgent action. A key factor in motivating this action is a clear understanding of how the global climate system works and specifically our ability to monitor its current state and model its future. Polar regions are at the same time critical components of this system and some of the most poorly observed, due to the extreme polar conditions and challenging logistics. This has resulted in persistent data and knowledge gaps – gaps that have existed for decades and continue to negatively impact climate action through the uncertainty they inadvertently feed. Robots, or autonomous systems, present more than just an exciting and innovative ‘tool’ for Antarctic science – they are quite possibly the only chance we have to fill these critical gaps and improve the accuracy of vital metrics such as the impact of polar melting on global sea-level rise. But simply adding robots to polar research expeditions doesn’t make all your problems go away – if anything, your challenges will increase. Nonetheless, a new golden era of polar exploration awaits those groups with the courage and expertise to utilise autonomous systems in this high risk/high reward research field. In this talk I’ll discuss the great progress made this century with polar autonomous systems, some of the lessons learnt and how we need to work together as scientists and engineers, within and across national borders, to observe, monitor and protect our precious polar regions.

Biography
Associate Professor Guy Williams is the Academic Lead of the Autonomous Maritime Systems Laboratory at the University of Tasmania. Guy trained in Aeronautical Engineering at the University of Sydney, graduating in 1995. He then spent several years focusing on software development and worked in the fledgling internet industry for Compuserve Pacific from 1996 to 1997. A love of mountaineering led him to change life directions and move to Tasmania to pursue glaciology, gaining his PhD from the University of Tasmania in 2004 on the topic of Antarctic ocean/sea-ice/ice-shelf interactions. Guy spent the next 15 years developing and deploying new methodologies using unmanned aerial systems, underwater vehicles, and instrumented seals in the Antarctic, the Arctic, and the Sea of Okhotsk, spending over two years at sea on international research expeditions. Today he leads a university AUV facility that works across research, industry, and defence, whilst also acting as a Consulting Director to a commercial aerial inspection and survey company and an AUV Training Consultant to the Australian Defence Force through the Australian Maritime College.

Toshio FUKUDA
Nagoya University and Waseda University, Japan
 
AI Robots and Moon Shot Program
Abstract
There are many ways to approach the research and development of intelligent robotic systems. I have been working on multi-scale robotic systems for many years, based on the Cellular Robotics System, the basic concept of the emergence of intelligence in a multi-scale way, from the organizational level and distributed robotics to biological cell engineering and nano-robotics with self-reconfigurability. It consists of many elements describing how the system can be structured from the individual to the group/society level, in analogy with biological systems.
Focusing on coevolution and self-organization capabilities, I will present a new initiative on AI and robots, one of the Moon Shot Programs started by the Japanese Government in 2020. Based on Society 5.0, it is a new and challenging program aiming at AI robotic systems in 2050. I will introduce some of the projects in this program for the realization of Society 5.0 by back-casting technologies from 2050 to the present.

Biography
Toshio Fukuda is Professor Emeritus of Nagoya University and Professor at Meijo University and Waseda University. He mainly engages in the research fields of intelligent robotic systems, micro- and nano-robotics, bio-robotic systems, and industrial applications of robotics and automation. He was President of the IEEE Robotics and Automation Society (1998-1999) and IEEE President (2020). He was Editor-in-Chief of the IEEE/ASME Transactions on Mechatronics (2000-2002). He chaired many conferences, serving, for example, as the Founding General Chair of the IEEE International Conference on Intelligent Robots and Systems (IROS, 1988), the IEEE Conference on Cyborg and Bionic Systems (CBS, 2017), and the IEEE Conference on Intelligence and Safety of Robots (ISR, 2018). He has received many awards, such as the IEEE Robotics and Automation Pioneer Award (2004) and the IEEE Robotics and Automation Technical Field Award (2010). He is an IEEE Fellow (1995), SICE Fellow (1995), JSME Fellow (2002), RSJ Fellow (2004), and VRSJ Fellow (2011).

Jiyoun Moon
Chosun University
 
Semantic scene understanding based human-robot cooperation
Abstract
Human-robot cooperation is unavoidable in various applications ranging from manufacturing to field robotics, owing to its adaptability and high flexibility. In particular, complex task planning in large, unstructured, and uncertain environments can employ the complementary capabilities of humans and diverse robots. For a team to be effective, knowledge regarding team goals and the current situation needs to be effectively shared, as it affects decision making. In this respect, semantic scene understanding in natural language is one of the most fundamental components for information sharing between humans and heterogeneous robots, as robots can perceive the surrounding environment in a form that both humans and other robots can understand. In this presentation, human-robot cooperation based on semantic scene understanding is introduced.
Biography
Jiyoun Moon received the B.S. degree in Robotics from Kwangwoon University in 2014, and the Ph.D. degree in Electrical and Computer Engineering from Seoul National University in 2020. She is currently an assistant professor at Chosun University. Her research interests include artificial general intelligence, cognitive robotics, and human-robot cooperation.
Seokju Lee
Korea Institute of Energy Technology (KENTECH), Korea
 
Computer Vision Meets Energy AI: From Autonomous Driving to Carbon Neutrality
Abstract
Computer vision is indispensable for existing and emerging AI industries, and its development has been accelerated by advances in machine learning. In particular, autonomous driving technology is the culmination of these studies. Recently, automobiles have been facing another turning point in the era of electric vehicles. In this presentation, I will present the future of automobiles as autonomous energy mobility and my vision at KENTECH.

Contents:
In this session, I will present the following:
– Summary of my Ph.D. research and its perspective
– My vision at KENTECH
– Social contributions
Biography
Seokju Lee received the B.S. degree in Electronics and Computer Engineering from Ulsan National Institute of Science and Technology (UNIST) in 2013, and the M.S. and Ph.D. degrees in Robotics from Korea Advanced Institute of Science and Technology (KAIST) in 2015 and 2021, respectively. He is currently an assistant professor at the Korea Institute of Energy Technology (KENTECH). His research interests include computer vision and AI for autonomous driving.

Jeong-Jung Kim
Korea Institute of Machinery & Materials, Korea
 
AI for Mobile Manipulation in Unstructured Environments
Abstract
A mobile manipulator is a robot system that combines a manipulator with a mobile robot. Since the system has a high degree of freedom and a wide workspace, it can be used for various tasks in typical human environments. Until now, such systems have mainly been used for simple tasks; to be used in unstructured environments, they should be able to handle the manipulation of various objects and navigation in narrow passages. In this presentation, I will show how these problems are addressed with AI for mobile manipulation.
Biography
Jeong-Jung Kim received the B.S. degree in Electronics and Information Engineering from Chonbuk National University in 2006, and the M.S. degree in Robotics and the Ph.D. degree in Electrical Engineering from Korea Advanced Institute of Science and Technology (KAIST) in 2008 and 2015, respectively. From 2015 to 2017, he was a post-doctoral researcher at the Robotics and Media Institute in the Korea Institute of Science and Technology (KIST). He is currently a senior researcher at the Korea Institute of Machinery & Materials (KIMM). His research interests include AI for manipulation and navigation in robotics.
Myunghee Kim
University of Illinois at Chicago, USA
 
Human-Wearable Robot Co-adaptation
Abstract
Reduced mobility is a significant societal problem. In 2010, 30.6 million adults had ambulatory limitations, and 23.9 million individuals found it difficult to walk one-quarter mile, in the US alone. This reduced mobility is related to increased medical problems and can have detrimental socio-economic impacts. The problem will only grow with an aging population. The drive to discover effective strategies for human gait assistance has led to investigations into human-wearable robots. In response, my research strives to advance the field by focusing on wearable robots that can respond to individual users, resulting in a smart assistance strategy in which the robot adapts to the human wearer. Furthermore, my research includes the concept of co-adaptation, in which the human user receives guidance on how to adjust their movement patterns to optimize their benefit from the personalized robot. In this talk, I will introduce a method for adapting a robot to its user, human-in-the-loop (HIL) optimization, and a user guidance method to facilitate robot use. HIL optimization is a machine learning approach using biofeedback, which significantly reduced walking and squatting efforts when users wore various wearable robots such as hip and ankle soft exosuits, ankle-foot orthoses, and ankle-foot prostheses. My group also found that user guidance via visual feedback can improve wearable robot use, even under an unfavorable condition that initially increased the cost of walking. The seminar will conclude with a discussion of the challenges and opportunities offered by the human-in-the-loop assistance controller and the user guidance method.
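To make the human-in-the-loop idea concrete (a minimal sketch under assumed interfaces, not Dr. Kim's actual protocol), the loop below searches over assistance parameters by repeatedly measuring an effort proxy from the wearer and keeping whichever parameters lower it; the measurement callback, parameter bounds, and simple (1+1) search rule are illustrative assumptions.

```python
import numpy as np

def hil_optimize(measure_effort, bounds, iters=20, step=0.1, seed=0):
    """Toy human-in-the-loop optimization of wearable-robot assistance parameters.

    measure_effort: callable(params) -> scalar effort estimate from the wearer
                    (e.g., an online metabolic-cost proxy), assumed to exist.
    bounds: (low, high) arrays defining the allowed parameter range.
    """
    rng = np.random.default_rng(seed)
    low, high = (np.asarray(b, dtype=float) for b in bounds)
    best = rng.uniform(low, high)            # initial assistance parameters
    best_cost = measure_effort(best)         # one walking bout per evaluation
    for _ in range(iters):
        cand = np.clip(best + step * (high - low) * rng.standard_normal(low.shape),
                       low, high)
        cost = measure_effort(cand)
        if cost < best_cost:                 # keep parameters that reduce effort
            best, best_cost = cand, cost
    return best, best_cost
```

The talk describes more sample-efficient optimizers and biofeedback signals; the sketch only shows the structure of the loop: propose parameters, let the human respond, measure, and update.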

Biography
About Dr. Myunghee Kim: Dr. Kim is an assistant professor in the UIC Department of Mechanical and Industrial Engineering. Her primary research focus is the development of assistive robotic devices for improving mobility and quality of life through integrative approaches combining numerical dynamic models, machine learning techniques, experimental testbeds, and controlled human-subject experiments. She received M.S. degrees from the Korea Advanced Institute of Science and Technology (KAIST) and the Massachusetts Institute of Technology (MIT), a Ph.D. degree from Carnegie Mellon University, and held a post-doctoral appointment at Harvard University. She was a control engineer at Samsung. She received the Best Paper Award in the Medical Devices category at the International Conference on Robotics and Automation (ICRA) in 2015. Recently, her group’s work on a data-driven model estimation method resulted in the Best Poster Award at the US-Korea Conference (UKC) 2019. Her research and education have received funding from a federal source (National Science Foundation (NSF) with the National Institute for Occupational Safety and Health (NIOSH)), a defense agency (Army Research Lab), an internal fund for global collaboration (UIC-TEC), a research institute (Korea Institute of Robotics & Technology Convergence), a consortium of companies (Office Ergonomics Research Committee), and a private company (LIG Nex1).

Inwon Jong
Chief Design Officer, Fren Inc., USA
We design healing environments by bringing purposeful stories to life.
Abstract
We identify ways to personalize experiences and optimize services by transforming our understanding into creative storytelling environments. We also utilize co-creation to develop guided storytelling through user experience and designed touchpoints. The outcome presents a healing environment that incorporates educational and therapeutic activities designed to engage and support inpatient care with extraordinary experiences. Our process behind a project combines service and business design capabilities. In addition to the usual customer research, we make an effort at an early stage of the project, to understand the business as well as the organization to create boundaries for the project. The deliverables are designed to serve the customers and users, the business, and the organization.
Biography
Inwon Jong is a Savannah, Georgia-based designer with affinities for design and technology. His work bridges digital aesthetics and user experience, exploring the expressive use of computational methods. Inwon has served as an accomplished product designer in the field of Information Technology since 2005. Over 10 years of his design career were spent with Samsung Electronics, where he had a clear and direct impact on the interaction software used by mobile phones and worked with a diverse range of co-workers across multiple mobile platforms. He is currently Chief Design Officer at Frendesign, where he focuses on designing and developing programs that yield digital artifacts and environments for therapeutic installations that directly improve children’s patient experiences.
Deokjin Lee
Jeonbuk National University, Korea
 
Model-free vs Model-based Reinforcement Learning to Control for Robotics
Abstract
After the AlphaGo vs. Lee Sedol match in 2016, in which Google DeepMind's program won a surprising 4-1 victory in Seoul, South Korea, reinforcement learning (RL) has received huge attention for this breakthrough and has led artificial intelligence to a new level. In general, reinforcement learning methods fall into two categories, model-based and model-free, each of which offers unique advantages. Despite the rich theoretical foundation of model-free deep reinforcement learning (DRL) and its various applications, it requires many data samples to find optimal policies that achieve good performance and must explore all possible actions, which may raise safety issues. In contrast, model-based RL can quickly obtain near-optimal control by learning a model within a rather limited class of dynamics. However, its disadvantage lies in the fact that most model-based algorithms learn local models that over-fit a few samples because they depend on simple function approximators. In this talk, we compare RL algorithms from both the model-based and model-free approaches on realistic robot control applications, such as drone stabilization and quadruped robot control, investigating how RL agents generalize to real-world autonomous robot control tasks. In addition, a new hybrid approach that integrates model-based and model-free RL is discussed with a robot control application.
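As a generic textbook illustration of the two categories mentioned above (not the specific algorithms compared in the talk), the sketch below contrasts a tabular model-free Q-learning update with a model-based value backup through a learned transition and reward model; the array shapes and hyperparameters are assumptions.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Model-free: update Q directly from one observed transition (s, a, r, s')."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

def model_based_backup(Q, P, R, gamma=0.99):
    """Model-based: one value-iteration sweep through a learned model.

    P: learned transition probabilities, shape (S, A, S')
    R: learned expected rewards, shape (S, A)
    """
    V = Q.max(axis=1)                 # current state values
    Q[:] = R + gamma * P @ V          # expected next-state value under the model
```

The model-free update needs only sampled transitions but typically very many of them, while the model-based sweep reuses the learned P and R to propagate value without further interaction, at the cost of model error; the hybrid approach mentioned in the abstract aims to combine the two.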
Biography
Deokjin Lee received the Ph.D. and M.S. degrees in Aerospace Engineering from Texas A&M University in May 2005, and the B.S. degree in Mechanical Aerospace Systems from Jeonbuk National University in 1996.
He worked for the Agency for Defense Development (ADD) from 2006 to 2007, and from 2009 to 2011 he was a research professor at the Center for Autonomous Vehicle Research (CAVR), Naval Postgraduate School, Monterey, CA, USA. Currently, he is an associate professor in the School of Mechanical Design Engineering at Jeonbuk National University and the director of the Center of Autonomous Intelligence e-Mobility (CAIM).
His honors and activities include IEEE Senior Member (2015-), AIAA Senior Member (2010-), the Texas A&M University Excellent Research Fellowship (2005), the John V. Breakwell Student Travel Award (AAS, 2003), President of the KSME Artificial Intelligence Research Society (2020-2021), membership in the Institute of Control, Robotics & Systems (ICROS), an AIAA Best Paper Award (2008), and KSME Best Paper Awards (2016, 2017).

Min Jun Kim
Southern Methodist University, USA
Magnetically Actuated Modular Robots for Self-Assembling and Additive Manufacturing
Abstract
Magnetically actuated modular robots can be controlled remotely by external magnetic fields, making them promising candidates for biomedical and engineering applications. This talk will introduce an innovative reconfigurable modular robotic system which controls miniature components that can be actively assembled and disassembled on command. This type of system could potentially improve the robustness and controllability of small-scale additive manufacturing. The base components are miniature cubes that contain permanent magnets. They are actuated using an external magnetic field generated via a three-axis Helmholtz coil system. The cubes can achieve different motion patterns such as pivot walking, tapping, and tumbling. Our project involves designing and fabricating scalable modular subunits using 3D printing. A set of design rules for the cubes has been defined. Algorithms to control the magnetic subunits have been studied. The issues addressed by this talk are at the interface of small-scale robotics, control theory, materials science, and bioengineering, and hold exciting prospects for fundamental research with the potential for diverse applications.
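As a back-of-the-envelope illustration of the actuation principle (not the group's actual controller), near the center of a three-axis Helmholtz coil system the field is approximately uniform and each component scales with the corresponding coil current, so a rotating field can be commanded to tumble a magnetized cube; the coil gains and field magnitude below are placeholder values.

```python
import numpy as np

# Placeholder coil constants (tesla per ampere) for the x, y, z Helmholtz pairs.
COIL_GAIN = np.array([2.0e-3, 2.0e-3, 2.0e-3])

def field_from_currents(currents):
    """Approximate field B (tesla) at the workspace center for given coil currents."""
    return COIL_GAIN * np.asarray(currents, dtype=float)

def tumbling_field_sequence(magnitude=5e-3, steps=60):
    """Rotate the field in the x-z plane so a cube with a fixed dipole tumbles forward."""
    angles = np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False)
    return np.stack([magnitude * np.cos(angles),
                     np.zeros_like(angles),
                     magnitude * np.sin(angles)], axis=1)
```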
 
Biography
Dr. MinJun Kim is presently the Robert C. Womack Endowed Chair Professor of Engineering at the Department of Mechanical Engineering, Southern Methodist University. He received his B.S. and M.S. degrees in Mechanical Engineering from Yonsei University in Korea and Texas A&M University, respectively. Dr. Kim completed his Ph.D. degree in Engineering at Brown University, where he held the prestigious Simon Ostrach Fellowship. Following his graduate studies, Dr. Kim was a postdoctoral research fellow at the Rowland Institute at Harvard University. He joined Drexel University in 2006 as Assistant Professor and was later promoted to Professor of Mechanical Engineering and Mechanics. Dr. Kim has been exploring biologically inspired sensing and actuation to develop new types of nano/micro robotics. His notable awards include the National Science Foundation CAREER Award (2008), Drexel Career Development Award (2008), Human Frontier Science Program Young Investigator Award (2009), Army Research Office Young Investigator Award (2010), Alexander von Humboldt Fellowship (2011), KOFST Brain Pool Fellowship (2013 & 2015), Bionic Engineering Outstanding Contribution Award (2013), Louis & Bessie Stein Fellowship (2008 & 2014), ISBE Fellow (2014), ASME Fellow (2014), Top10 UNESCO-Netexplo Award (2016), KSEA & KOFST Engineer of the Year Award (2016), IEEE Senior Member (2017), Sam Taylor Fellowship (2018), Gerald J. Ford Research Fellowship (2018), and Protégé of the Academy of Medicine, Engineering and Science of Texas (2019).