
Tutorial: On Aspects of Virtual Reality
Friday 31st October 2025, 9:30 AM – 11:30 AM
The IEEE IEMCON 2025 VR-AR Tutorial will be presented by Dr. Phillip Bradford (University of Connecticut, Stamford, USA).
Phillip Bradford
(University of Connecticut, Stamford, USA)
Bio: Dr. Phillip G. Bradford is on the faculty at the University of Connecticut, where he directs the computer science program at the Stamford campus.
He is a computer scientist with extensive experience in academia and industry. Phil was a post-doctoral fellow at the Max-Planck-Institut für Informatik. He earned his PhD at Indiana University, an MS from the University of Kansas, and a BA from Rutgers University. He was on the faculty at Rutgers Business School and the University of Alabama's Engineering School. He worked for BlackRock and Reuters Analytics, was a Principal Architect for General Electric, founded a startup, and worked with a number of early-stage firms. Phil has a deep belief in bringing real research to practice, which is the root of his entrepreneurial perspective. He has a handful of best-in-class results, and his Erdős number is 2. He has given over 70 talks in 10 countries and is the author or co-author of over 70 articles.
Registration:
Category | Registration fees |
IEEE Student and IEEE Life Member | $0 ($375/$475) |
General | $25 (or $30 for both tutorials) ($750–$755/$850–$855) |
IEEE Member | $25 (or $30 for both tutorials) ($650–$655/$750–$755) |
All fees are in US Dollars and include all applicable taxes.
Kindly note that the fee shown for tutorials ($25–$30) covers only the tutorial charges. Authors must pay the full registration fee separately, which includes paper registration, and attendees who are not presenting papers must also register under the attendee registration category.
Goal
To gain basic VR skills using A-Frame and related systems, including building basic visual effects and basic animations for VR. The exercises work on Android and iOS phones with small VR headsets.
You must bring a laptop. For the last part we will use an Ubuntu VM.
Background
This tutorial uses small plastic VR headsets.
We will start with web-browser VR, using tools such as Mozilla's webxr-api-emulator.
You can do everything here on your own machine. We will use a basic Ubuntu VM for the animation topic.
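For orientation, a minimal A-Frame scene (essentially A-Frame's stock hello-world) looks like the markup below; on glitch.com this would be the project's index.html. The CDN version number is illustrative — use whichever release your glitch.com starter pins.

```html
<!-- Minimal A-Frame scene: three primitives rendered in WebVR/WebXR. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Each tag is an entity whose attributes are components. -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Opening this page on a phone and tapping the headset icon splits the view for a Google Cardboard-style viewer; editing an attribute (say, a `color`) on glitch.com updates the scene live.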
Topics and Tools
Tools | Topics / goals | Time | Exercises |
Introduction | Overview and outline of goals. Goal: set up Google Cardboard with the glitch.com system | 10 minutes | Get the VR headset working with glitch.com; change images on glitch.com and see the updates in Google Cardboard |
A-Frame basics | Simple 3D A-Frame examples. Goal: work with glitch.com and Google Cardboard | 15 minutes | Fast-moving exercises 1 and 2 |
Foundations | Use JavaScript, the DOM, events, and Web Components | 15 minutes | Basic components exercise and some JavaScript timers for updating A-Frame views |
three.js components | Goal: work with the basics of geometries, materials, lights, and models | 15 minutes | three.js examples 1: exercises for basic geometries, materials, and views |
A-Frame / three.js components | Goal: work with models, shadows, and controls; integrate A-Frame and three.js | 15 minutes | three.js examples 2 with A-Frame: examples of models, shadows, and controls |
Entity-component architecture (ECA) | Goal: use three.js with ECA rather than the standard OO paradigm — the design that gives us A-Frame | 15 minutes | JavaScript OO vs. ECA: an example that is difficult in OO but easy in ECA |
A-Frame and planets | A complex 3D A-Frame example. Goal: work with complex A-Frame detail and basic planetary math; illustrate ECA, geometries, controls, etc. | 20 minutes | Start with a three.js planet example; migrate it to A-Frame using ECA; examine changes in the math and their immediate impact in Google Cardboard |
A-Frame and animations | Goal: show how to do basic animation | 20 minutes | Example of basic animation |
Conclusion | Goal: review our learning | 5 minutes | |
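The OO-vs-ECA contrast in the table above can be sketched in any language. Below is a minimal, illustrative Python sketch (the names `Entity`, `attach`, and `rotation_system` are invented for this example and are not A-Frame's actual API): in the OO style, every new combination of behaviours needs a new subclass, while in ECA an entity is just a bag of components and behaviour lives in systems that act on whichever components are present.

```python
# OO approach: behaviour is fixed by the class hierarchy, so mixing
# features (spinning box, spinning sphere, bouncing spinning box, ...)
# forces a combinatorial explosion of subclasses.
class Box:
    def __init__(self):
        self.position = [0.0, 0.0, 0.0]

class SpinningBox(Box):  # every new combination needs a new subclass
    pass

# ECA approach (the pattern behind A-Frame): an entity is an object
# plus a dictionary of components; systems update matching entities.
class Entity:
    def __init__(self):
        self.components = {}

    def attach(self, name, data):
        self.components[name] = data
        return self

def rotation_system(entities, dt):
    """Advance every entity that happens to have a 'rotation' component."""
    for e in entities:
        rot = e.components.get("rotation")
        if rot is not None:
            rot["angle"] += rot["speed"] * dt

# Mix components freely, with no class hierarchy to restructure.
box = (Entity()
       .attach("geometry", {"primitive": "box"})
       .attach("rotation", {"angle": 0.0, "speed": 90.0}))
sphere = Entity().attach("geometry", {"primitive": "sphere"})  # no rotation

rotation_system([box, sphere], dt=0.5)
print(box.components["rotation"]["angle"])  # 45.0 — only the box rotated
```

Adding, say, a "bounce" behaviour now means attaching one more component and writing one more system, instead of subclassing.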
Tutorial: Applications of Generative AI
Thursday 30th October 2025, 2:00 PM – 4:00 PM
The IEEE IEMCON 2025 GenAI Tutorial will be presented by Dr. Sudipta Sahana (University of Engineering and Management, Kolkata).
Sudipta Sahana
(University of Engineering and Management, Kolkata)
Bio:
Dr. Sudipta Sahana is a distinguished academician and researcher, currently serving as a Professor in the Department of Computer Science and Engineering (Artificial Intelligence & Machine Learning) at the University of Engineering and Management (UEM), Kolkata, India. With a robust academic background and an impressive track record in both teaching and research, Dr. Sahana has carved a niche for himself in the domains of cloud computing and machine learning. He completed his Bachelor of Technology (B.Tech) and Master of Technology (M.Tech) in Computer Science and Engineering, both at the West Bengal University of Technology (WBUT). His passion for advanced research and academic excellence led him to pursue a Doctor of Philosophy (Ph.D.) in Computer Science and Engineering at the University of Kalyani. His doctoral work laid a strong foundation for his ongoing research in emerging areas of computer science.
As a prolific researcher, Dr. Sahana has made substantial contributions to the global scientific community. He has authored or co-authored more than 95 research papers published in reputed international journals and presented at prestigious conferences. His publications cover a wide range of topics, including optimization algorithms in cloud computing, data analytics, intelligent systems, and the application of machine learning techniques to real-world scenarios. He has also contributed chapters to several edited academic volumes, reflecting the depth and breadth of his research expertise. His research interests focus on cloud computing and machine learning. He is a life member of CSI and a fellow of IETE.
Registration:
Category | Registration fees |
IEEE Student and IEEE Life Member | $0 ($375/$475) |
General | $25 (or $30 for both tutorials) ($750–$755/$850–$855) |
IEEE Member | $25 (or $30 for both tutorials) ($650–$655/$750–$755) |
All fees are in US Dollars and include all applicable taxes.
Kindly note that the fee shown for tutorials ($25–$30) covers only the tutorial charges. Authors must pay the full registration fee separately, which includes paper registration, and attendees who are not presenting papers must also register under the attendee registration category.
Goal
Introduce participants to Generative AI and its applications in text, image, and creative content generation.
Explain key concepts of machine learning and different generative models (GANs, VAEs, autoregressive).
Provide hands-on practice in text generation using GPT-based models.
Explore image generation with tools like DALL·E and Stable Diffusion.
Demonstrate no-code generative AI app building through AWS PartyRock for quick prototyping.
Discuss ethical challenges, including deepfakes, copyright, and responsible AI practices.
Inspire participants to apply Generative AI in projects, research, and industry contexts.
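As a taste of the "autoregressive" model family listed in the goals above, here is a toy next-token generator in Python. It is purely illustrative: in place of a neural network like GPT, "the model" is just bigram counts learned from a tiny corpus, but the generation loop has the same shape — each step conditions on the tokens produced so far.

```python
from collections import defaultdict
import random

# A tiny "training corpus"; real models train on billions of tokens.
corpus = ("generative ai models generate text and "
          "generative ai models generate images").split()

# "Training": count which word follows which (a bigram model).
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def generate(prompt, length, seed=0):
    """Autoregressive sampling: each new token is drawn conditioned
    on the previous one, then appended and fed back in."""
    rng = random.Random(seed)
    out = [prompt]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:  # no continuation seen in training
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("generative", 4))
```

GPT-class models replace the bigram table with a transformer that conditions on the whole context window, but the sample-append-repeat loop is the same idea participants will drive with prompts in Colab.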
Background
This tutorial uses Google Colab and AWS PartyRock.
We will start with text generation using GPT-based models such as ChatGPT.
You can do everything here on your own machine with internet access.
We will also explore image generation with DALL·E and Stable Diffusion, and build simple AI apps using AWS PartyRock.
Topics and Tools
Tools | Topics / goals | Time | Exercises |
Slides + examples | Introduction to generative AI: definition, differences from traditional AI, applications (content, art, music, data) | 15 minutes | Discussion of real-world applications |
Slides + examples | Key concepts: ML basics, types of generative models (GANs, VAEs, autoregressive), the training process, a UEM-deployed project | 15 minutes | Q&A on model types and training |
Google Colab + GPT-3 / ChatGPT | Hands-on text generation: setup, demo, participants generate text using prompts | 30 minutes | Prompt-based text generation exercise |
Google Colab + DALL·E / Stable Diffusion | Exploring image generation: GANs, DALL·E, Stable Diffusion; demo and activity | 25 minutes | Image generation from text descriptions |
AWS PartyRock (generative AI playground) | Exploring generative AI with AWS PartyRock: hands-on text, image, and chatbot generation in a no-code environment; how to build and deploy simple generative AI apps quickly | 25 minutes | Create and experiment with generative AI apps in PartyRock; design simple text/image/chatbot applications |
Slides + case studies | Ethical considerations: deepfakes, copyright issues, responsible AI use | 10 minutes | Discussion of ethical implications and guidelines |
Important Deadlines
Full Paper Submission: | 19th September 2025 |
Acceptance Notification: | 25th September 2025 |
Final Paper Submission: | 12th October 2025 |
Early Bird Registration: | 9th October 2025 |
Presentation Submission: | 19th October 2025 |
Conference: | 29-31 October 2025 |
Announcements:
- Best Paper award will be given for each track.
- Conference Record Number - 67450