BRIGHT Institute webinar: Otter Voice Meeting Notes
BRIGHT Institute webinar
Doug Evans
56min
Unknown Speaker
00:50
All right, we're going to get started. Welcome everyone. I am Lorien Abroms, Professor of Health Communication at the George Washington University Milken Institute School of Public Health, and I'm also the co-director of the BRIGHT Institute. Welcome to today's webinar from the BRIGHT Institute on the topic of AI for social and behavioral change. We're going to have three wonderful presenters today. I'll go ahead and introduce the three presenters, they'll present, and then we will take questions and have a great discussion on AI and health behavior change. If you have questions during the presentations, please put them in the chat, and hopefully there'll be time and we'll get to them all by the end.

Okay, so our first presenter today is Dr Jordan Boyd-Graber. Dr Boyd-Graber is an associate professor in the University of Maryland's Computer Science Department and Language Science Center, and an affiliate professor in the School of Information. Jordan is focused on the applications of artificial intelligence in language-based settings, such as question answering and information triage, as well as on building evaluations for these models and understanding relative human versus computer abilities, which is very important and relevant at this time. He's going to be talking about an LLM chatbot that he has created for maternal health.

Our second presenter is Dr Doug Evans, and I'm going to go ahead and just introduce all three, and then we'll start off with Jordan. Dr Evans is a professor of Communication and Global Health in the Milken Institute School of Public Health at the George Washington University. Dr Evans's work focuses on the translation of communication and marketing strategies, primarily digital-media-based methodologies, into interventions to promote the adoption of health behaviors and the avoidance of health risk behaviors. He's going to be talking about his work with AI for digital intervention delivery and evaluation.

And then finally, our third presenter is Katerina Botsiou. Katerina is a communications professional with a passion for humanitarian work and a specific focus on serving underprivileged populations. She currently works with the World Health Organization on various digital projects, leveraging technology to make a positive impact in the world and help people live longer, healthier lives. She has worked as a communication specialist with the United Nations Office at Geneva, the International Telecommunication Union, and the United Nations Institute for Disarmament Research. She'll be talking about the WHO chatbot called Sarah.

And with that, I'd like to turn it over to Jordan to present on maternal health. Go ahead, Jordan.
Unknown Speaker
03:49
Yes, thank you so much for having me. It's a pleasure to be here.

Let me first say that this is a collaboration with a bunch of wonderful people. Neha is a graduate student in computer science who did most of the work in creating the agent, and then there are a lot of collaborators in public health and statistics at the University of Maryland who did the hard work of getting this out into the field and getting people to actually use it.

The motivation for our work is that maternal outcomes in the United States aren't that great compared to our peer countries, and a lot of this is attributable to disparate outcomes, particularly for underserved minorities. There are many things that go into this, and Doug can probably talk about this better than I can, but one part of it is access to information, and we are trying to address that with a chatbot that my collaborators named Rosie. The idea is that we want to have an intervention where we provide new and expectant mothers with vetted, trustworthy information, and then measure how that changes health outcomes.

I won't talk about this today, but if people want to discuss it in the Q&A, we can: we're also doing this bilingually, which presents some really interesting challenges, because a lot of the trusted information in Spanish isn't coming from the United States but from other places. So how do you actually deal with that and give consistent answers to questions in both English and Spanish?
So we have the system. It's out there, it's being deployed; people can go on their smartphones, ask questions, and get answers back. But the focused thing I want to talk about today is that many of these questions have presuppositions. A presupposition is when you ask a question but you are asserting something to be true, and sometimes these presuppositions are false. In the natural language processing literature there are a lot of examples of this, like "Which linguist invented the light bulb?" or "When did Da Vinci paint the Sistine Chapel?"

These are kind of silly examples, but the problem is that artificial intelligence basically pretends it's doing an improv skit and rolls with it. It is so accommodating to the desires of the person asking the question that it assumes whatever the question says is true and then creates an answer predicated on that. These silly examples, like linguists and light bulbs and Renaissance painters, are fun, but sometimes this is really serious business. Maternal health is one example where this could be sensitive and important; other examples could be asking about taxes or money or laws or immigration. If our AI systems just roll with the punches and assume these things are true, this could lead to bad outcomes.

And if you think about what's happening in a health context, a lot of the time these presuppositions reflect misconceptions or false information that people have internalized. For example, take the question "When should I give my baby fever-reducing meds after shots?" This can mean that the person believes that you can give babies fever-reducing medications, that receiving shots can cause fever, that I should give my baby fever-reducing medicine after a vaccine, or that there's some interval at which you need to give medicine after a shot. Some of these are true, some of these are not. And if you ask this sort of question to a doctor, I think our idealized version of what a doctor would do is address the presuppositions: reinforce the ones that are true, combat the ones that are false, or maybe probe a little bit ("Why do you ask that?"), rather than just trying to answer the question as directly as possible.
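To make the decomposition concrete, here is a minimal sketch of how a question like the one above might be represented once its presuppositions are pulled out. The field names are illustrative, not the Rosie project's actual schema, and the verdicts are left unset because in the workflow described they come from public health experts, not from code.

```python
# Illustrative only: a hand-written decomposition of the example question.
# Field names are hypothetical; in the actual study, true/false labels
# were assigned by public health experts, not hard-coded.
example = {
    "question": "When should I give my baby fever-reducing meds after shots?",
    "presuppositions": [
        {"claim": "Babies can be given fever-reducing medications", "verdict": None},
        {"claim": "Receiving shots can cause fever", "verdict": None},
        {"claim": "I should give my baby fever-reducing medicine after a vaccine", "verdict": None},
        {"claim": "There is a set interval at which to give medicine after a shot", "verdict": None},
    ],
}
```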
So this is our big-picture goal, and it is still very much a work in progress. If people have ideas, or more examples, or more use cases that they think would be interesting to look at, we are certainly all ears.

Okay, so to try to address this, what we did is we gathered a bunch of maternal health questions with potential presuppositions: from Reddit, from our own data (interactions with people using the chatbot), and from the Google Natural Questions resource. We then extracted the presuppositions in these data, labeled those presuppositions as true or false with public health experts, and then crafted what we thought would be reasonable answers to those questions. Looking at the data, there are a lot of inferences that can be drawn from questions, about four to five inferences per question, and of those inferences anywhere from a fifth to a third are false. So this is kind of concerning, and it gave us a further boost to our mission of trying to create a question answering system that can address these false inferences.
I won't go into the technical details here, but the basic pipeline that we use is: we have a system that tries to look at the possible false inferences that could be drawn from a question; it addresses those false inferences by retrieving targeted information that could either support or refute them; and then it prompts the system to answer the underlying question and also to address any of the false inferences that were in the question.
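Reading between the lines of that description, a minimal sketch of such a pipeline might look like the following. This is a reconstruction, not the project's actual code: call_llm and retrieve_passages are hypothetical stand-ins for whatever model and vetted health corpus the real system uses.

```python
# Hypothetical sketch of the presupposition-aware QA pipeline described above.
from dataclasses import dataclass

@dataclass
class Presupposition:
    claim: str
    verdict: str        # "true", "false", or "unverifiable"
    evidence: list

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM of choice")

def retrieve_passages(query: str, k: int = 3) -> list:
    raise NotImplementedError("plug in retrieval over a vetted health corpus")

def extract_presuppositions(question: str) -> list[str]:
    # Step 1: enumerate what the question takes for granted.
    out = call_llm(
        "List the factual assumptions a reader must accept for this "
        f"question to make sense, one per line:\n{question}"
    )
    return [line.strip("- ").strip() for line in out.splitlines() if line.strip()]

def verify(claim: str) -> Presupposition:
    # Step 2: retrieve targeted passages that support or refute the claim.
    evidence = retrieve_passages(claim)
    verdict = call_llm(
        f"Claim: {claim}\nEvidence: {evidence}\n"
        "Answer with one word: true, false, or unverifiable."
    ).strip().lower()
    return Presupposition(claim, verdict, evidence)

def answer(question: str) -> str:
    checked = [verify(c) for c in extract_presuppositions(question)]
    false_claims = [p.claim for p in checked if p.verdict == "false"]
    # Step 3: answer the underlying question AND explicitly address any
    # false presuppositions instead of rolling with them.
    return call_llm(
        f"Question: {question}\n"
        f"These assumptions in the question are false: {false_claims}\n"
        "Answer the question, correcting each false assumption first."
    )
```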
We then did a study to check how people liked the answers that addressed the presuppositions. When we added in the responses to the presuppositions, people liked the answers about as much as answers to quote-unquote normal questions, but the preference for these answers increased dramatically when there were plausible inferences that could be drawn. This is something we're continuing to work on: maybe we shouldn't address all of the possible inferences, but just focus on the ones that are highly probable or the most important.

So this is what we're working on right now, and as I wrap up, here is what I hope we can talk about going forward. I think this is a sign that AI should not be one-size-fits-all, and that's kind of the way it's been presented so far, because you have a bunch of companies trying to offer a commoditized API for access to artificial intelligence as a service. That really isn't the best model, because people do need precise, targeted answers, but we don't have that because our data aren't ready. We need data sets that reflect who is asking the question. We need evaluations that reflect user desires. And because we don't have the data, our methods aren't ready either: we need alternatives to reinforcement learning with human feedback that take into account who is asking the question, what the question is about, and whether it should be tailored. Personally, I think there are a lot of tools from psychology, like item response theory, that can help us do this. And because AI doesn't look like this yet, our policy isn't ready either. This isn't my métier, and I think everyone here knows about it at greater depth than I do, but we also need policy to reflect this: we need to balance privacy and personalization, and that's going to be the next challenge.

And so with that, I will stop sharing, and I look forward to hearing from everyone else.
Unknown Speaker
13:10
Great, thank you so much. Lots of questions, but we'll hold them until we go through all three of our presenters. Our next presenter is Dr Doug Evans.
Speaker 1
13:22
Thank you very much. Let me share my screen... right, here we go. Give me a second. Okay, hopefully you can see that. Yep, all right, great. Well, thank you all very much. I am going to present on my work on social and behavioral change using AI, and let me just start off by talking broadly about the work that we do here at the BRIGHT Institute at GW.

I view the use of AI for social and behavioral change as falling within the broader field of digital health interventions, or DHIs, as they've been defined in the literature. What I mean by that is basically any kind of program, intervention, or campaign aimed at changing specific behaviors in a population, using digital platforms for the delivery of that intervention, and also research to evaluate the effectiveness of those kinds of programs. But a big question that we talked about in a paper a couple of years ago is: how effective are these programs, and under what conditions do they work? They don't necessarily work under all conditions, and we need to understand that context much better than we do now.
So AI can definitely be used for social good. There's a lot of discussion out in the media, and in many areas of the literature, about all the negatives of AI, but it's definitely the case that AI can be a major force for social good. The McKinsey Global Institute, just as an example, tried to categorize this idea a few years ago, and they identified ten domains in which AI has been shown to be effective, at least to a certain extent, in promoting prosocial causes and goods: economic empowerment, education, infrastructure, public and social sector factors, and combating false news or polarization, potentially, although of course AI can also create those problems. So it's clear that AI can be used for social good, and we have a lot of use cases that are starting to be built up. We need to understand those use cases and be able to draw inferences from them to more effectively design programs using AI in the future.
One way this has been used is in AI marketing: leveraging AI tools and methods, such as large language models, to develop insights around consumers (I'll use the term "consumer," but you could also think of those as people, or as beneficiaries if we're in the public health world), to help people change their behaviors for good, and also to personalize the offerings that we're making. These kinds of approaches are being used very much in the commercial world, but they can also be applied to promoting prosocial behaviors, as I'll try to argue and illustrate here in a second.

In a recent paper with colleagues Marco Bardus and Jeff French, we identified six key benefits of applying AI as part of behavioral change programs: the ability to develop more personalized and tailored supports; to increase scale and reach; to increase engagement in programs, making programs interesting and engaging and having people stay with them longer by maintaining interest in a topic; to spot trends and respond rapidly to, for example, emerging public health threats like the next pandemic; and to build communities of interest and provide support.

One example of this that we talked about in the paper was the idea that generative AI can be used to address issues like competition with unhealthy behaviors: by examining large data sets, identifying user patterns where people are potentially engaging in unhealthy behaviors, and then developing, based on those large data sets, better offers of exchanges that are more beneficial and thereby outcompete the competition.
There's a concept that I've used and been applying in my research for a while now, which is the idea of digital segmentation. One way of thinking about this is like the technology in social media for retargeting, or as it's sometimes called, remarketing. Basically, we can use this to segment and target content in online offerings to customized, very precisely defined audiences based on available data. AI enables us to target and segment very precisely, and to recruit participants into studies based on those precise targets and segments. So intervention studies can use retargeting to direct content very specifically to customized audiences, and also to titrate how much of that content is delivered. You can basically design naturalistic randomized controlled trials or quasi-experiments and look at the effects of delivering more versus less content, or different types of content, potentially comparative-effectiveness sorts of studies. And all of this can be done through AI-enabled social media research. There are a lot of other ways in which we can use AI for digital behavior change research, but I'm going to talk about this one specific area today in the interest of time.

One example of a platform that allows you to do this is the Virtual Lab platform, which I have worked with for a number of years now, and I'd be happy to talk more about that in the Q&A segment.

Just to put a little more meat on the bone here: basically, one thing you can do is take the Meta platforms, for example, Instagram, Facebook and so forth. These have been used for years now by businesses to target content based on demographics, interests, your online behavior, where you live and so forth, and so you can create custom audience lists based on combinations of user data. For example, with demographics plus online activity you develop a very precise segment, and you can market content that's more relevant to that segment. That information can be used to direct content to users, and behavioral interventions can use exactly the same approach, as I'll show you in a second.
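As a rough illustration of what such a segment definition can look like, here is a minimal sketch in the shape of Meta's publicly documented ad-targeting spec. The specific values, including the interest entry, are hypothetical placeholders, not the study's actual segments.

```python
# A hypothetical audience definition in the shape of Meta's ad-targeting
# spec. Values are illustrative; a real study would define segments from
# its own eligibility criteria and available platform data.
targeting_spec = {
    "geo_locations": {"countries": ["US"]},           # where users live
    "age_min": 18,                                    # demographics
    "age_max": 24,
    "interests": [                                    # platform-inferred interests
        {"id": "<interest-id>", "name": "Vaping"},    # placeholder interest entry
    ],
    "publisher_platforms": ["facebook", "instagram"],
}
```

Recruitment ads are then served against a spec like this, and once participants consent, they can be collected into custom audience lists so that study content is retargeted only to them.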
So the basic idea of retargeting: for example, if a user is interested in getting an online degree, or they want to go to the University of Maryland or GW or something like that and get an online degree, well, okay, you're going to get content that's relevant to that area of interest, and maybe to who the AI thinks you are in this case; the slide shows an example of an ad that you might get there. So that's basically the idea of retargeting.

We've been using this technology to do online research. I've done a number of studies using this in the past few years on topics like vaccination, maternal and child health, obesity prevention, and in particular nicotine use prevention, smoking and vaping prevention. So I'll talk about that briefly.

Just to illustrate this idea, we used retargeting to identify social media users who were young adults, 18- to 24-year-olds living in the USA, and we recruited them into an intervention using paid advertising on social media, on the Meta platforms. We designed a randomized controlled trial to examine vaping and smoking outcomes, and then we did 30- and 60-day follow-ups. We rapidly recruited over 1,800 participants, and we were able to randomize them to different levels of exposure. So basically, once they're recruited into the study, we can direct content to those individuals based on which study arm they're in. These were campaign-themed anti-vaping ads; I'll show you an example in a second. We were attempting to apply a dose-response curve of exposures and then examine the effects of exposure levels on different outcomes related to vaping over a 60-day period.

All of this is delivered by chatbot, so it's an AI-enabled study. We were using social media to identify the audience segments, and you see the basic study design here: how we recruited and randomized participants, how people who weren't eligible were screened out, how the AI-based chatbot was used to recruit people, and how we then screened them into the study using chats, via the Facebook Messenger app, to confirm eligibility.
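A minimal sketch of the randomization step described here, assuming the design assigns consented participants to exposure-level arms; the arm names and dose counts are illustrative, not the trial's actual parameters.

```python
import random

# Hypothetical exposure arms for a dose-response design: each arm receives
# a different number of ad exposures over the follow-up window.
ARMS = {"control": 0, "low_dose": 10, "high_dose": 30}  # illustrative doses

def randomize(participant_ids: list[str], seed: int = 42) -> dict[str, str]:
    """Randomly assign each eligible, consented participant to an arm."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    return {pid: rng.choice(list(ARMS)) for pid in participant_ids}

assignments = randomize([f"user_{i}" for i in range(1800)])
# Each arm's list of user IDs can then be uploaded as a custom audience so
# the platform serves that arm its assigned volume of anti-vaping ads.
```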
These are just some examples of recruitment ads. We got a little bit more sophisticated as we went through and developed slightly
Speaker 1
22:39
more polished creative than this kind of static, generic "get some Amazon money" ad, though something simple like this works anyway. We used social media segmentation to advertise to these young adults; the chatbot, as I said, was used for recruitment, and we recruited into the research using that segmentation.

This is one of the video ads that was used, anti-vaping content based on that segmentation. It's actually an excerpt from CBS This Morning, where the CEO of Juul is essentially admitting that no one really knows the long-term effects of Juuling, not even Juul. A damning admission, I would say, and a pretty hard-hitting piece of content. And here's another example of a branded anti-vaping ad, from our partners at Truth Initiative, that was used in the study.
Speaker 2
23:42
I feel like you don't want me to be happy. You only ever wanted my money. You're just like your mother.
Unknown Speaker
23:52
Oh, Depression Stick.
Speaker 1
23:54
Again, trying to appeal to humor. And clearly, we could use AI and the results of the research that we're doing to create even more entertaining, more engaging ads for the audience.

When we ran this RCT over 60 days, we found reduced vape-use intentions at follow-up among current users in the treatment group compared to control; so the intervention was effective among current vapers. We also increased anti-vape-industry beliefs, basically the belief that the vaping industry is trying to manipulate you and cause you harm, among current vapers compared to control. So we demonstrated evidence of effectiveness in this trial, and that's been published in the Journal of Medical Internet Research.

As I wrap up, there are a lot of future research questions and challenges in this field. How do we expand and improve the use of AI? There are a lot of AI-for-behavior-change challenges: for example, biased inputs and the potential to reinforce inequalities. We're not really engaging in an interpersonal interaction; is that going to be lost if we emphasize these kinds of interventions over interpersonal interventions? What's the right balance there? These biases may also lead to unrealistic goals or program expectations. We need to combat those, and use the data that we're gathering through our AI-enabled studies to actually improve the quality of those studies. So future research really needs to keep up with changing technologies and also figure out how to translate our research findings into public health impact.

I'll be happy to answer questions later on, and let me stop sharing.
Unknown Speaker
25:47
Thank you so much, Doug, that was great. And now we'll turn it over to our third presenter, Katerina Botsiou.
Unknown Speaker
26:07
I don't know if you can see my screen... yeah. So hi everyone. Thank you very much for inviting me, and thank you for the introduction earlier, Lorien. As Lorien said, my name is Katerina Botsiou, and I work in the communications department at WHO headquarters here in Geneva. It's a pleasure to be here today and to be able to share with you the work that we at WHO have been doing in the digital field. I'm here to talk to you about Sarah, the AI-enabled virtual human that we launched recently, but I would like to start by giving you an overview of all our chatbots and of Sarah's predecessor, Florence, because Sarah is part of WHO's digital health communications ecosystem.

But let's start with the basics: what does WHO do with digital technologies? As you know, WHO is the UN specialized agency for health. We work on health guidelines, standards, policies and programs around the world with our 194 Member States. We work on all health topics, and our work touches everyone's lives: from keeping people healthy and well, to ensuring that health systems function well when people do fall sick, and of course protecting people from health emergencies.
In terms of digital, WHO is interested in new technologies as a way to expand access to health and health information. As with any other area of health, we are led by evidence, and whilst we're very much interested in innovative ways to reach people through digital channels, we must always make sure that our work is evidence-based. We're also interested in the broader environment for digital, such as security, privacy, data ownership, confidentiality and, of course, any ethical concerns.

During COVID, with healthcare systems around the world overwhelmed, millions of people relied on digital tools, whether for advice, for remote consultations with doctors, to check symptoms, and more. So we immediately identified the need to find alternative ways to reach out to people with health information around the coronavirus. During that time there was something else circulating in the digital sphere, and that was the spread of misinformation. We all remember some, even anecdotal ones, like "if you eat garlic, you can cure yourself of COVID," and how fast all of these spread. This is dangerous, and an example of how false information can spread, of its impact, and of how health inequalities can become worse because of it.

So in order to harness the benefits of digital and to effectively combat misinformation, WHO looked at the use of chatbots. The idea was that we need to reach people where they are. So we thought: where do we spend most of our time every day? On our phones. The average person has something like 40 apps installed on their phone, but on a daily basis we use no more than four or five. So what we did is we partnered with messaging apps such as Facebook, Viber, WhatsApp and Free Basics to get up-to-date COVID information out to as many people as possible.
Now, unlike human experts, these chatbots can talk to millions of people in their preferred language, on their preferred platform, anywhere, at any time. The WHO chatbots are currently available in 26 languages and have reached over 20 million users. People can interact with them to get information on COVID prevention measures and how to stop the spread, and over time we added content on stress management, women's health and tobacco cessation. On tobacco cessation specifically, we worked on a 42-day challenge that is still active on WhatsApp, Viber and Facebook Messenger, and we have also worked with partners on building the first digital AI health worker that can help people quit tobacco, called Florence.
So, as I said, Florence is WHO's first-ever digital health worker. She is essentially an autonomously animated digital person who was launched on the WHO website in July 2020 and was created in a partnership between WHO, Soul Machines, Google and Amazon Web Services. During this first iteration, she was using a predefined corpus and could speak to users about COVID-19, COVID-19 vaccines, myth-busters around COVID, and tobacco cessation and its link to COVID. She could also help users make a quitting plan, and refer them to toll-free quit lines and apps that could help them on their quitting journey.

A bit later on, as we were moving away from COVID, Florence 2.0 was launched as a global ambassador on NCDs, non-communicable diseases. So in addition to the topics that she could already speak about, she was expanded to cover additional topics on NCDs, such as physical activity, healthy eating and mental health, mostly in the sense of stress management.
I would now like to introduce you to Sarah. Recently, and more specifically in April 2024, WHO relaunched Florence. As you can see in the picture, the way she looks has changed, though that was actually changed during the transition from the Florence 1.0 to the 2.0 version. But yes, now she looks different, and she has also been renamed Sarah. Sarah is an acronym for Smart AI Resource Assistant for Health.

Sarah is a prototype digital health promoter, and the big difference from her predecessor is that she now uses generative AI to help people live healthier lives. She has been trained with information from the World Health Organization and trusted partners, and although she can speak about a variety of topics, she specializes in tobacco and e-cigarettes, mental health, nutrition, physical activity, cancer and diabetes. Users can speak to Sarah by video or text, on any device, around the clock. She's currently available in eight languages: the six UN languages (English, French, Spanish, Arabic, Russian and Chinese) plus Hindi and Portuguese, but in theory she has the potential to speak any language that is supported by OpenAI. Sarah has enhanced conversational skills, can show empathy by mirroring people's facial expressions, and harnesses the power of AI to speak in a more personalized and engaging way.
I think it's important to stress that all conversations with Sarah are anonymous and non-identifiable. Sarah illustrates the potential of artificial intelligence to deliver health information to anyone with an Internet connection, but she is by no means to be considered a medical tool. As I said, she's powered by generative AI, and we have put guardrails in place to make sure Sarah sources reliable, evidence-backed information.
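As a rough sketch of what "generative AI with guardrails" can mean in practice (this is a guess at the general pattern, not WHO's actual implementation): constrain the model with a system prompt, ground it in approved content, and keep it from acting as a medical tool. The model name and prompt text below are illustrative; the transcript only establishes that the underlying models come from OpenAI.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a health-promotion assistant. Answer only from the approved "
    "excerpts provided. You are not a medical tool: never diagnose or "
    "prescribe, and direct emergencies to local health services."
)

def answer(user_message: str, approved_excerpts: list[str]) -> str:
    # Ground the reply in vetted content rather than the model's free recall.
    context = "\n\n".join(approved_excerpts)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Approved excerpts:\n{context}\n\nQuestion: {user_message}"},
        ],
    )
    return resp.choices[0].message.content
```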
We have taken a user-centric approach with Sarah, and we take user feedback very seriously: we have put a survey in place, which can be accessed either on the WHO landing page for Sarah or while in conversation with her. On this slide, I would like to show you some of the user quotes that we have received in the first two months since Sarah's launch:

"Sarah has been really helpful. I'm happy today as I found Sarah as my supportive hand. I'll talk to Sarah whenever I need to talk to her."

"Sarah is very helpful. At times you don't want to share your secrets with anyone, yet it is very depressing, and sharing would help. So Sarah is the right person to share with, since she doesn't know me and will not judge me."

"Such a great way to increase access to mental health services."

But I think I will stop here and let Sarah speak for herself.