Teleophthalmology is no longer a novelty; it is a pragmatic response to a structural problem. For common and silent diseases such as diabetic retinopathy (DR), the challenge is not “whether detection is possible,” but how to sustain annual coverage with reasonable turnaround times and without saturating ophthalmology.
Clinical guidelines emphasize periodic retinal exams for people with diabetes and the value of early detection for timely treatment (for example, annual exams in many scenarios according to practice recommendations). Useful sources to guide criteria and clinical discussions:
- ADA Standards of Care (retinopathy section): https://diabetesjournals.org/care
- AAO Preferred Practice Pattern, Diabetic Retinopathy: https://www.aao.org/education/preferred-practice-pattern/diabetic-retinopathy-ppp
From a public health perspective, WHO frames telemedicine as a digital intervention that can improve access when implemented with governance and quality, and without replacing system strengthening:
https://www.who.int/publications/i/item/9789241550505
In this article, you will find three implementation models (hospitals, private clinics/networks, and campaigns) with an operational focus (not slides), including minimum requirements, roles, indicators, and common mistakes. At the end, we explain how we solve this at Retinar in real contexts across Argentina and Latin America.
Before choosing a model: the “minimum workflow” every teleophthalmology program needs
Regardless of context (public/private/campaign), a sustainable program typically includes:
1) Capture point
Primary care center, emergency department, office, clinic, mobile unit, or hospital.
2) Quality control
Rules and/or AI to reduce “non-evaluable images” and avoid unnecessary repeat visits.
3) Prioritization (triage)
So specialists review urgent cases first instead of drowning in normal studies.
4) Remote reading and reporting
Ophthalmology reports where it adds value: confirmation, severity, plan, and referral.
5) Referral and closed-loop follow-up
Appointment, referral pathway, treatment, and follow-up (if the loop is not closed, coverage does not count).
6) Recordkeeping and traceability
Who captured, what was decided, when, and with which data. If medical software is involved, this also aligns with clinical evaluation frameworks such as IMDRF SaMD:
https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-170921-samd-n41-clinical-evaluation_1.pdf
With that minimum workflow in mind, let’s go through the models.
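The six steps above form a state machine per patient, and the most common operational question is "which referred cases never closed the loop?" Here is a minimal sketch of that idea; the stage names, `Case` structure, and the 30-day threshold are illustrative assumptions, not a fixed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum, auto

class Stage(Enum):
    # Illustrative stages mirroring the minimum workflow.
    CAPTURED = auto()
    QUALITY_CHECKED = auto()
    TRIAGED = auto()
    REPORTED = auto()
    REFERRED = auto()
    LOOP_CLOSED = auto()  # patient attended and treatment/follow-up was recorded

@dataclass
class Case:
    patient_id: str
    events: dict = field(default_factory=dict)  # Stage -> datetime

    def advance(self, stage: Stage, when: datetime) -> None:
        self.events[stage] = when

def open_loops(cases, now, max_days=30):
    """Referred cases with no recorded closure after max_days: the re-contact list."""
    stale = []
    for c in cases:
        referred = c.events.get(Stage.REFERRED)
        if referred and Stage.LOOP_CLOSED not in c.events:
            if now - referred > timedelta(days=max_days):
                stale.append(c.patient_id)
    return stale
```

Whatever system you use, the point is that traceability (step 6) is what makes a query like `open_loops` possible at all.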
Model 1: Teleophthalmology in hospitals (centralized model with referral network)
When it fits
- Hospital with high demand and waiting lists for fundus/retina exams.
- Referral hub hospital for a region.
- Need to organize workflows (inpatient, emergency, diabetes, internal medicine).
Typical workflow
1) Capture at the hospital (or associated peripheral hospitals).
2) Quality control plus immediate recapture when needed.
3) Prioritization (clinical rules and/or AI).
4) Remote reading by ophthalmology (in-hospital or regional reading pool).
5) Referral to retina/treatment/follow-up (according to severity).
Common roles
- Capture technician (nursing, imaging technician, trained staff).
- Operations coordinator (appointments, internal campaigns, dashboards).
- Reading ophthalmologist (plus a second reader for audits when applicable).
- IT/interoperability lead (EHR, PACS, messaging).
Real requirements (what usually breaks pilots)
- Referral agreements (capacity/slots) and response times.
- Stable connectivity plus an offline plan (for critical capture windows).
- Simple indicators (see the metrics section).
- Exception policy: what happens with a non-evaluable image? What about incidental findings?
Advantage: governance and continuity. The hospital can act as a regional reading center.
Risk: without triage design, the hospital becomes a funnel (more captures, more backlog).
Model 2: Teleophthalmology in private clinics and networks (efficiency plus reimbursement)
When it fits
- Clinic with outpatient offices and diagnostic imaging.
- Provider network that wants to add a retina service without increasing specialist load.
- Payers seeking better coverage and lower costs from late complications.
Typical workflow
1) Capture in clinic or satellite offices.
2) Standardized quality control (fewer complaints and repeats).
3) Risk-based prioritization for reading.
4) Remote report and recommendation (follow-up / referral / urgency).
5) In-network appointment for treatment (retina, laser, anti-VEGF, glaucoma, etc.) or coordinated external referral.
Private model keys
- SLA as a product: report in 24 to 72 hours by severity.
- Lightweight integration: PDF export, API, or EHR integration according to maturity.
- Patient communication: reminders, instructions, and low-friction case follow-up.
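An SLA "by severity" is easy to state and easy to stop measuring. A minimal sketch of how compliance could be tracked, assuming hypothetical severity labels and hour targets (24/48/72 is the range from the list above, not a contractual recommendation):

```python
from datetime import datetime, timedelta

# Hypothetical SLA targets: hours from capture to final report, by severity.
SLA_HOURS = {"urgent": 24, "moderate": 48, "routine": 72}

def sla_met(severity: str, captured: datetime, reported: datetime) -> bool:
    """True if the report arrived within the window contracted for its severity."""
    return reported - captured <= timedelta(hours=SLA_HOURS[severity])

def sla_compliance(studies):
    """Fraction of studies meeting SLA; each study is (severity, captured, reported)."""
    if not studies:
        return 1.0
    met = sum(sla_met(sev, cap, rep) for sev, cap, rep in studies)
    return met / len(studies)
```

Reporting this number per severity tier, per site, is usually enough to keep the SLA honest.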
Advantage: improves productivity. Specialists focus on pathology and treatment.
Risk: if designed as just another exam instead of a workflow, it stays boutique and does not scale.
Model 3: Campaigns and primary care (PHC) (mass, territory-based coverage)
When it fits
- Municipalities, health regions, and hospitals with defined catchment areas.
- Diabetes and chronic disease programs that already engage patients.
- Areas with access barriers (distance, appointment bottlenecks, cost, specialist shortage).
Typical workflow
1) Capture at PHC centers or mobile units (local schedule plus opportunistic capture).
2) Real-time quality control (ideal) so patients do not leave without a usable study.
3) Immediate prioritization for rapid referral of high-risk cases.
4) Remote reading from a central reading hub.
5) Referral to hospital/retina service with protected slots.
6) Follow-up: if the patient does not attend, re-contact and reschedule.
Lessons learned
- Logistics define success: timing, power, connectivity, training, replenishment.
- The biggest enemy is the unclosed loop: without effective referral, campaigns do not create impact.
Advantage: maximizes access and territorial equity.
Risk: without referral agreements and follow-up, it turns into capture for capture's sake.
Minimum indicators (to know in 30 days if it works)
You do not need 40 metrics. With these 8, you can operate:
1) Coverage: eligible vs screened patients (monthly and by site).
2) Image usability: percentage gradable / non-gradable (and causes).
3) Time to pre-triage: capture to prioritization (minutes/hours).
4) Time to report: capture to final report (SLA).
5) Referral rate: percentage requiring specialist follow-up / urgency.
6) Time to effective appointment: referred to attended (days).
7) Closed-loop completion: percentage of referred patients who complete care.
8) Audit: second-read rate / disagreement rate (quality control).
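Several of these indicators can be computed from a plain case log. A minimal sketch for four of them; the field names (`gradable`, `referred`, `attended`, `captured_at`, `reported_at`) are illustrative assumptions, not a fixed schema.

```python
from datetime import datetime, timedelta

def indicators(cases, eligible_count):
    """Compute coverage, usability, referral rate, loop closure, and time to report
    from a list of case dicts. Assumes hypothetical field names (see lead-in)."""
    screened = len(cases)
    gradable = [c for c in cases if c["gradable"]]
    referred = [c for c in cases if c.get("referred")]
    closed = [c for c in referred if c.get("attended")]
    report_hours = [
        (c["reported_at"] - c["captured_at"]).total_seconds() / 3600
        for c in gradable if c.get("reported_at")
    ]
    return {
        "coverage": screened / eligible_count,
        "gradable_rate": len(gradable) / screened if screened else 0.0,
        "referral_rate": len(referred) / screened if screened else 0.0,
        "loop_closure": len(closed) / len(referred) if referred else 1.0,
        "mean_hours_to_report": (
            sum(report_hours) / len(report_hours) if report_hours else None
        ),
    }
```

Broken down monthly and by site, this is enough to see within 30 days whether the program is operating or stalling.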
Common mistakes (and how to avoid them)
Mistake 1: “Teleophthalmology equals video call”
In retina programs, scale usually comes from image tele-reading plus a referral workflow, not synchronous video consults.
Mistake 2: Capturing without quality control
This increases recaptures, complaints, and follow-up loss.
Mistake 3: Not defining who gets referred and with what priority
Triage (rules and/or AI) is what prevents saturation.
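What "rules and/or AI" means in practice can be very small. A minimal sketch of rule-based triage, mapping a graded severity to a review priority; the grade labels loosely follow the international DR severity scale, and the thresholds are illustrative assumptions, not clinical guidance.

```python
# Lower number = read sooner.
PRIORITY = {"urgent": 0, "soon": 1, "routine": 2}

def triage(dr_grade: str, gradable: bool) -> str:
    """Map a study's grade and gradability to a hypothetical review priority."""
    if not gradable:
        return "soon"      # resolve recapture/in-person exam before the queue grows
    if dr_grade in ("severe_npdr", "pdr"):
        return "urgent"    # fast-track to retina
    if dr_grade == "moderate_npdr":
        return "soon"
    return "routine"       # none/mild: scheduled reading

def reading_queue(studies):
    """Sort studies so urgent cases are read first; study = (id, grade, gradable)."""
    return sorted(studies, key=lambda s: PRIORITY[triage(s[1], s[2])])
```

The exact thresholds matter less than having them written down and applied before the specialist opens the worklist.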
Mistake 4: No integration (even simple integration)
Even without an API on day one, at least have exports, records, identifiers, traceability, and dashboards.
Mistake 5: No governance and evaluation design
If AI is involved, you need evidence, monitoring, and a continuous improvement plan aligned with good practices (for example, IMDRF SaMD as a general clinical evaluation framework).
How we implement this at Retinar (and why it adapts to Argentina and LATAM)
At Retinar, we design teleophthalmology to operate in the field: public networks, private clinics, and campaigns, with heterogeneous equipment and the need to scale without adding unnecessary specialist burden.
In practice, this translates to:
- Decentralized capture in PHC, clinics, or hospitals, with a flow designed to minimize recaptures.
- Assistance and quality control to ensure usable studies (and reduce non-gradable images).
- AI prediagnosis/triage to prioritize high-risk cases and accelerate referral.
- Remote reading and reporting so specialists intervene where they add value.
- Multi-camera compatibility (using installed equipment) and API interoperability when site systems allow it.
The goal is not to “add AI,” but to make the program work: broader coverage, better turnaround, and effective referral completion.
CTA: let’s design your model in 2 weeks (without an endless pilot)
If you are in Argentina or Latin America and want to implement retina teleophthalmology for a hospital, clinic/network, or campaigns/PHC, we help you choose the right model and turn it into operations with metrics from day one.
Contact us to schedule a Retinar demo and a phased implementation plan (with your equipment, your resources, and your referral workflow).