2025-11-25
Just Look into the Camera: AI Allows You to Try on Glasses Without Visiting an Optician
Imagine that to try on glasses, all you need to do is look into your phone’s camera. The system recognises your face, determines its scale, “removes” the glasses you are currently wearing, measures all the necessary parameters and instantly recommends frames and lenses that fit you perfectly. This is not a distant future but the direction in which artificial intelligence and 3D technologies are rapidly progressing, transforming our shopping habits.
Virtual try-on technology for eyewear, which became popular during the pandemic, is now an integral part of modern optical retail. Global brands such as Mister Spex already use it, and its developers – for example, Fittingbox – claim that virtual try-ons can triple sales.
Still, even if you find frames you like in an online shop, one question often remains: will the size actually fit?
“The accuracy of virtual frame fitting completely depends on correct scale estimation,” explains Dr Artūras Serackis, a researcher involved in the SustAInLivWork project that is establishing the Artificial Intelligence Competence Centre, and a professor at Vilnius Gediminas Technical University. “For the frames to appear precisely positioned on the face in a virtual environment, the system must know not only the scale of the glasses but also the scale of the person’s face in the image. This is one of the most complex technological challenges.”
According to him, the same issue arises when trying to accurately measure parameters such as pupillary distance from a photo. In such cases, a reference object with known dimensions is required.
“For instance, Fittingbox uses standard-sized bank cards. The user briefly places a card next to their face, and the system determines the scale based on it. But is this practical today, when most payments are made via smartphone and many people do not carry physical cards anymore?” notes the researcher.
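In code, the principle behind such reference-object scaling is simple. The sketch below assumes the card edges and the pupil centres have already been located in the image; the pixel coordinates and function names are purely illustrative and do not reflect any vendor's actual system, while the 85.60 mm width is the standard bank-card format.

```python
# Illustrative sketch of reference-object scaling (not any vendor's actual code).
# Assumption: the card edges and pupil centres have already been detected and
# are given here as pixel coordinates.

CARD_WIDTH_MM = 85.60  # standard ID-1 bank-card width

def mm_per_pixel(card_left_px: float, card_right_px: float) -> float:
    """Scale factor derived from the card's known physical width."""
    card_width_px = abs(card_right_px - card_left_px)
    return CARD_WIDTH_MM / card_width_px

def pupillary_distance_mm(left_pupil_px, right_pupil_px, scale_mm_per_px):
    """Convert the pixel distance between pupil centres into millimetres."""
    dx = right_pupil_px[0] - left_pupil_px[0]
    dy = right_pupil_px[1] - left_pupil_px[1]
    return (dx ** 2 + dy ** 2) ** 0.5 * scale_mm_per_px

# Example with made-up pixel coordinates:
scale = mm_per_pixel(card_left_px=412.0, card_right_px=1060.0)
pd = pupillary_distance_mm((530.0, 498.0), (1010.0, 502.0), scale)
print(f"Estimated pupillary distance: {pd:.1f} mm")
```

Without the card, the same pixel distance could belong to a face photographed close up or far away, which is exactly why the reference object is needed.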
Trying on Glasses Without Removing Your Own
Dr Serackis and his colleagues are currently focusing on advanced facial-analysis technologies capable of automatically assessing image quality, facial orientation and measurement accuracy.
“For example, it is crucial that the face in the photo is turned straight towards the camera – otherwise measurement errors can increase significantly,” he stresses. AI models identify key facial points and analyse their positions to determine the exact head orientation.
Although many AI models can recognise up to 468 facial landmarks, the number alone does not guarantee accurate measurements. High-precision results require highly reliable methods, including assessment of lighting direction, type and intensity, as well as other factors that influence image quality.
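The 468-point figure corresponds to the freely available MediaPipe Face Mesh model. As a rough illustration of how landmark positions can yield head orientation, the sketch below feeds a handful of those landmarks into OpenCV's solvePnP routine; the landmark indices and the generic 3D face template are common illustrative choices, not the measurement method being developed in the project.

```python
# Illustrative head-pose check from facial landmarks (a sketch, not the
# project's own pipeline). Assumes the MediaPipe Face Mesh model (468 points)
# and a generic 3D face template commonly used with OpenCV's solvePnP.
import cv2
import numpy as np
import mediapipe as mp

# Generic 3D reference points (approximate face template, in millimetres)
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),          # nose tip
    (0.0, -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),  # left eye outer corner
    (225.0, 170.0, -135.0),   # right eye outer corner
    (-150.0, -150.0, -125.0), # left mouth corner
    (150.0, -150.0, -125.0),  # right mouth corner
], dtype=np.float64)
# Corresponding Face Mesh landmark indices (commonly used convention)
LANDMARK_IDS = [1, 152, 263, 33, 291, 61]

def head_angles(image_bgr):
    """Return (pitch, yaw, roll) in degrees, or None if no face is found."""
    h, w = image_bgr.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as mesh:
        res = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not res.multi_face_landmarks:
        return None
    lm = res.multi_face_landmarks[0].landmark
    image_points = np.array([(lm[i].x * w, lm[i].y * h) for i in LANDMARK_IDS],
                            dtype=np.float64)
    # Simple pinhole camera approximation: focal length ~ image width
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, cam,
                               np.zeros(4), flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)
    angles, *_ = cv2.RQDecomp3x3(rot)  # pitch, yaw, roll in degrees
    return tuple(angles)
```

In such a sketch, a try-on service could reject a photo whose yaw or pitch exceeds a few degrees before taking any measurements.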
“Consider a situation where someone wants to try on glasses while already wearing a pair – which likely happens in over 98 percent of cases. It is far more convenient to use the service without taking off your own glasses. In such cases, we need another class of AI models that can detect glasses and ‘remove’ them from the image,” explains Dr Serackis.
Generative AI can remove glasses from a photo, but this introduces the risk of distortions, reducing measurement accuracy. This challenge has sparked a new research direction, requiring advanced computing infrastructure and specialised AI training servers.
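As a rough sketch of how such removal can work with off-the-shelf tools, the example below repaints a masked glasses region with a generic diffusion inpainting pipeline from the Hugging Face diffusers library. The glasses mask is assumed to come from a separate, hypothetical segment_glasses() detector, and this generic pipeline only stands in for the specialised models discussed above. It also illustrates exactly the risk the researchers mention: the repainted region is plausible, not measured.

```python
# Sketch: removing glasses from a photo with generative inpainting.
# Assumptions: a hypothetical segment_glasses() function returns a binary
# PIL mask of the glasses region (white = repaint); photo and mask are
# resized to 512 x 512; the public inpainting checkpoint below is used only
# as a generic example.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

def remove_glasses(photo: Image.Image, glasses_mask: Image.Image) -> Image.Image:
    """Inpaint the masked glasses region so facial measurements can be taken."""
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(
        prompt="a face without glasses, natural skin, photorealistic",
        image=photo,
        mask_image=glasses_mask,
    ).images[0]

# photo = Image.open("face.jpg").convert("RGB").resize((512, 512))
# mask = segment_glasses(photo)   # hypothetical glasses-segmentation step
# clean = remove_glasses(photo, mask)
```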
The Biggest Challenge: Cost of Preparing 3D Models
Once the system accurately estimates scale, extracts the necessary facial parameters and “removes” existing glasses, the next step is virtual try-on of different frames – including size adjustment. This requires a 3D model of each frame design.
“Eyewear manufacturers do not publish their 3D models, so obtaining them directly is impossible. To have a 3D model, we must create it ourselves: either by manually modelling it in specialised software, which is time-consuming and requires skilled specialists, or by physically scanning the frame and post-processing the model. The third option is to use AI models capable of generating 3D frame models automatically based on product images from online shops,” he says.
The main challenge here is the cost of producing 3D models. Frame collections are updated at least twice a year, and hundreds of new designs appear annually. If each model had to be created manually, virtual try-on services would become financially impractical or offer only a limited selection.
“At first glance, there seem to be plenty of 3D scanning solutions available, but in practice they have significant limitations. High-resolution scanners designed for jewellery capture too small an area – glasses simply don’t fit. Meanwhile, handheld scanners designed for larger objects struggle to reproduce thin frame elements accurately,” explains the SustAInLivWork researcher.
Towards a Next-Generation Measurement System
According to Dr Serackis, the arrival of neural 3D reconstruction techniques such as NeRF and, later, single-image methods like One-2-3-45++ has allowed AI to predict the full shape of an object from a single image and generate 3D models. However, for eyewear, such models are not yet specialised enough – they often produce asymmetries or distorted proportions. Moreover, most are not designed to combine information from several images, even though this capability is essential in this field.
“That’s why we are developing alternative, original methods: step by step we analyse the classical 3D modelling process and look for ways to automate each stage using AI. Although we have not yet reached a breakthrough, we are working with leading AI and computer-vision experts in Lithuania – within the SustAInLivWork centre and partner institutions at Vilnius Gediminas Technical University. With powerful computing resources for AI training, we are confident that in the coming years we will develop a highly advanced solution for working with eyewear frames,” he says.
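One simple way to quantify the asymmetries mentioned above is to compare a generated frame mesh with its own mirror image. The sketch below does this with the trimesh and SciPy libraries; the file name and the assumption that the frame's plane of symmetry is x = 0 are illustrative, and this check is not part of the project's pipeline.

```python
# Sketch: quantifying left/right asymmetry of a generated 3D frame model.
# Assumptions: the frame mesh is stored in a hypothetical file frame.obj,
# it is roughly centred, and its plane of symmetry is x = 0.
import numpy as np
import trimesh
from scipy.spatial import cKDTree

def asymmetry_score(mesh_path: str) -> float:
    """Mean distance (in mesh units) between the mesh and its mirror image."""
    mesh = trimesh.load(mesh_path, force="mesh")
    verts = np.asarray(mesh.vertices)
    verts = verts - verts.mean(axis=0)             # centre the model
    mirrored = verts * np.array([-1.0, 1.0, 1.0])  # reflect across x = 0
    tree = cKDTree(verts)
    dists, _ = tree.query(mirrored)                # nearest original vertex
    return float(dists.mean())

# A perfectly symmetric frame scores near zero; a generated model with one
# temple thicker or shorter than the other scores noticeably higher.
# print(asymmetry_score("frame.obj"))
```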
SustAInLivWork is the first competence centre of its kind in Lithuania, systematically consolidating knowledge and skills in AI. It brings together four leading Lithuanian universities – Kaunas University of Technology, Vytautas Magnus University, Lithuanian University of Health Sciences, and Vilnius Gediminas Technical University – in partnership with Tampere University (Finland) and Hamburg University of Technology (Germany).
It is a long-term, cross-sectoral platform connecting science, business, the public sector, and society.
The SustAInLivWork project is funded by the Horizon Europe programme (No. 101059903) and by the European Union funds for 2021–2027 (Project No. 10-042-P-0001).