Proceedings of the 2005 International Conference on New Interfaces for Musical Expression (NIME05), Vancouver, BC, Canada
Using Music to Interact with a Virtual Character

Robyn Taylor, Daniel Torres, Pierre Boulanger
Department of Computing Science
University of Alberta
Edmonton, Alberta, Canada T6G 2E8
ABSTRACT
We present a real-time system which allows musicians to interact with synthetic virtual characters as they perform. Using Max/MSP to parameterize keyboard and vocal input, meaningful features (pitch, amplitude, chord information, and vocal timbre) are extracted from live performance. Extracted musical features are then mapped to character behaviour in such a way that the musician's performance elicits a response from the virtual characters. The system uses the ANIMUS framework to generate responsive character behaviour. Experimental results are presented for simple characters.
Keywords
Music, synthetic characters, advanced man-machine interfaces, virtual reality, behavioural systems, interaction techniques, visualization, immersive entertainment, artistic installations
1. INTRODUCTION
We have created a system which enables a musician to interact with life-sized virtual characters within an immersive environment. The virtual characters "listen" to the musician's performance and modify their behaviour accordingly. The musician can perform specific musical gestures that have been predefined to trigger specific behavioural responses from the virtual characters, or he or she may simply perform spontaneously and watch the characters' responses as they unfold.
This system may be used for many purposes, ranging from the recreational (users may simply be entertained by the ability to control virtual characters through a musical interface) to the educational (character responses could be used to encourage users to develop particular skill sets) to the artistic (scores and animations could be choreographed to produce a cohesive virtualized performance).
In this paper, we will describe the motivation behind the project. We will then describe the ANIMUS [9] framework, which enables virtual characters to respond to user input by using an accumulation of basic behaviours. We will address the real-time extraction of musical features from live musical performance and describe how these extracted features can then be used to control character behaviour. In addition, we will discuss the user interface and design decisions which must be considered when creating such a system. Finally, we will describe the current status of the system as well as future plans for expanding it to include more complex characters.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
NIME'05, May 26-28, 2005, Vancouver, BC, Canada.
Copyright 2005. Copyright remains with the author(s).
2. MOTIVATION
Modern virtual reality environments are capable of producing compelling visualizations which draw the user into a virtual world bounded only by its designer's imagination. These high-end visualization systems are capable of displaying stereoscopic images on wall-sized display screens in such a way that the observer feels truly immersed in virtual surroundings. The primary objective of this project is that these visualization tools be used to enhance the experience of live musical performance by allowing both the performer, and his or her audience, to enter a virtual world populated by believable characters that react in response to real-time musical input.
There exist several examples of previous research in this area. Notably, Jack Ox's "Color Organ" [6] uses immersive virtual reality systems (in particular, the Cave Automatic Virtual Environment) to create visual abstractions of music, allowing the user to explore the complexity of a composition in three dimensions. Similarly, the "Singing Tree" [5], designed by Oliver et al. at the MIT Media Laboratory, uses digital displays as well as sculptured set pieces to enclose participants in a virtual environment which responds to their vocal utterances. Masataka Goto and Yoichi Muraoka's Virtual Dancer, "Cindy" [4], provides visual enhancement to musical "jam" sessions by dancing in rhythm to musicians' playing.
Additionally, immersive music visualization techniques have been applied in live performance contexts. Golan Levin and Zach Lieberman's audiovisual performance piece, "Messa di Voce" [3], utilizes large-scale projection screens along with motion tracking technologies to allow live vocalists to integrate their physical bodies into a projected virtual environment, visualizing their vocalizations with relation to their physical locations on the performance stage.
We strive to similarly reduce the barrier between the musician and the virtualized environment, enabling us to create an interactive virtual performance space. Our system allows a musician to perform a piece of music in a natural fashion: playing a digital piano and singing into a microphone. The visualization system processes the musical input in real-time and stimulates visibly responsive behaviour in synthetic characters existing within the virtual space. The virtual environment is displayed on a large stereoscopic screen. This allows the participant to experience the virtual space at an appropriate scale, as if she or he was part of this virtual world. Observers will also perceive the performer and the virtual characters to be of similar size, increasing the realism of the performer/virtual character interaction (see Figure 1).

Figure 1: The User in the Virtual Environment
3. THE ANIMUS FRAMEWORK
The ANIMUS framework was developed in a previous project by Torres and Boulanger [9] to facilitate the creation of "believable artificial characters". They [11] describe the notion of a believable artificial character as follows:

"A believable artificial character shows its own virtual thoughts and emotions, plans and beliefs; it can be tricked; it is not necessarily intelligent or efficient, nor always makes the best decisions, but acts within the bounds of its role and personality. Its animation may not be realistic, but succeeds at expressing its internal state and makes it interesting to an audience."

Our goal is to create instances of believable artificial characters which respond to live music according to their own individual personalities.
In the ANIMUS framework, responsive behaviour is constructed by three separate layers:

• Perception Layer: The characters must perceive data in their surroundings.

• Cognition Layer: The characters must assess the data they receive and determine how they should respond to it, based on their internal cognitive processes.

• Expression Layer: The characters must exhibit visible behaviours to indicate their cognitive state.

We need to create a character, based on these three layers, which responds to user input received in the form of a real-time musical performance. In order to do this, we are adding a fourth layer to the ANIMUS architecture:

• Musical Perception Filter Layer: A bank of Max/MSP [1] objects must extract musical features from the real-time performance and send them to the perception layer to be processed.
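The resulting four-stage update cycle can be sketched in pseudocode form. The sketch below is purely illustrative: all class, method, and variable names (and the "arousal" state variable) are our assumptions for exposition, not the ANIMUS API, and the real system receives features from Max/MSP rather than from a Python dictionary.

```python
# Illustrative sketch of the layered update cycle: filter layer ->
# perception -> cognition -> expression. Names are assumptions,
# not the ANIMUS framework's actual API.

class Character:
    def __init__(self):
        self.arousal = 0.0  # a toy internal state variable

    def perceive(self, features):
        # Perception layer: read musical features posted by the
        # Musical Perception Filter Layer
        return features.get("vocal_amplitude", 0.0)

    def cognize(self, amplitude):
        # Cognition layer: fold the percept into internal state
        # (exponential smoothing, so loud sustained input raises arousal)
        self.arousal = 0.9 * self.arousal + 0.1 * amplitude

    def express(self):
        # Expression layer: pick a visible behaviour from internal state
        return "dance" if self.arousal > 0.5 else "idle"

def update(character, features):
    # One frame of the pipeline; 'features' stands in for the data
    # the filter layer would deliver each frame.
    amplitude = character.perceive(features)
    character.cognize(amplitude)
    return character.express()

c = Character()
print(update(c, {"vocal_amplitude": 1.0}))  # first frame: still "idle"
```

A sustained loud input drives the smoothed arousal value past the threshold after several frames, at which point the expressed behaviour changes.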
3.1 Perception Layer
The ANIMUS Project uses a method of information organization known as a "blackboard system". All data that is perceivable by the ANIMUS characters is entered on the blackboard, and this blackboard is monitored and modified as the system runs. Our system must add information about the live musical performance to this blackboard in order for the ANIMUS characters to perceive the music as an input stimulus. To do this, our perception layer interfaces with the specialized Musical Perception Filter Layer, which identifies specific musical features. See Section 4 for further details on the musical feature extraction process.
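The blackboard pattern described above can be illustrated with a minimal sketch. The paper does not show ANIMUS's data structures, so the class and key names below are assumptions; the point is only the shared-store shape: producers (the filter layer) post entries, and characters poll them.

```python
# Minimal blackboard sketch: a shared store that producers write to
# and characters poll. Illustrative only; not ANIMUS's implementation.

class Blackboard:
    def __init__(self):
        self._entries = {}

    def post(self, key, value):
        # e.g. the Musical Perception Filter Layer posts extracted features
        self._entries[key] = value

    def read(self, key, default=None):
        # e.g. a character's perception layer polls for a feature
        return self._entries.get(key, default)

bb = Blackboard()
bb.post("vocal_pitch_hz", 440.0)  # written by the filter layer
bb.post("chord", "C major")
print(bb.read("vocal_pitch_hz"))  # read by a character: 440.0
```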
3.2 Cognition Layer
To create ANIMUS characters which respond in a believable fashion to musical input, we need to create a cognition layer that assigns relevant meaning to the musical features they perceive. This is the most complex step of the responsive behavioural process, since this is where the character's illusion of "personality" is created. In order to create an ANIMUS character that exhibits a distinct personality, in addition to appearing to be aware of the music in its surroundings, we must create a complex cognition layer which maps the incoming musical data to modifiers affecting the virtual character's unique internal state. For a discussion of how musical input is used to influence character behaviour, see Section 5.
3.3 Expression Layer
An ANIMUS character expresses its internal state by executing an appropriate sequence of animations produced by the expression layer, using keyframe-based animation. These animations are created at run-time by using an interpolation scheme that takes predefined keyframes and generates intermediate transitions.
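The run-time generation of intermediate transitions can be illustrated as follows. The paper does not name the interpolation function used, so linear interpolation between keyframes is an assumption, and a "pose" here is simplified to a tuple of joint angles:

```python
# Illustrative keyframe interpolation: given (time, pose) keyframes,
# generate the in-between pose at an arbitrary time t. Linear blending
# is an assumption; the paper does not specify its scheme.

def interpolate(keyframes, t):
    """keyframes: list of (time, pose) sorted by time; pose: tuple of floats."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]      # clamp before the first keyframe
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]     # clamp after the last keyframe
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)  # normalized position in the segment
            return tuple(a + u * (b - a) for a, b in zip(p0, p1))

# Two keyframes for a two-joint pose, blended at the halfway point:
keys = [(0.0, (0.0, 0.0)), (1.0, (90.0, 45.0))]
print(interpolate(keys, 0.5))  # (45.0, 22.5)
```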
4. MUSICAL PERCEPTION FILTER LAYER
A live musical performance contains a wealth of real-time information which we as humans perceive as meaningful. In order for our synthetic character to assign any meaning to a stream of auditory information, it must receive this stream in a simplified form. We have therefore chosen to create an additional layer to interface with the ANIMUS framework, the Musical Perception Filter Layer. In this layer we parse the stream of incoming auditory data in order to identify important musical features to send as input to the ANIMUS system's perception layer.

To extract features from live musical performance we use Max/MSP to monitor input from a microphone and MIDI controller.
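One of the extracted features is chord information from the MIDI keyboard. The authors implement this in Max/MSP; as a language-agnostic illustration (not their actual patch), a simple triad classifier over the currently held MIDI note numbers might look like:

```python
# Illustrative triad classifier over held MIDI note numbers.
# Not the authors' Max/MSP patch; just one way of deriving
# "chord information" from keyboard input.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def classify_triad(midi_notes):
    """Return e.g. 'C major' if the held notes form a triad, else None."""
    pitch_classes = sorted({n % 12 for n in midi_notes})
    if len(pitch_classes) != 3:
        return None
    # Try each pitch class as the root and test the interval pattern
    for root in pitch_classes:
        intervals = sorted((pc - root) % 12 for pc in pitch_classes)
        if intervals == [0, 4, 7]:   # major third + perfect fifth
            return f"{NOTE_NAMES[root]} major"
        if intervals == [0, 3, 7]:   # minor third + perfect fifth
            return f"{NOTE_NAMES[root]} minor"
    return None

print(classify_triad([60, 64, 67]))  # C-E-G -> "C major"
```

Because the test is done over pitch classes, inversions and doubled octaves are classified the same as root-position triads.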
We determine the user's vocal pitch and amplitude using Puckette, Apel and Zicarelli's fiddle~ object [7]. Additionally, we make use of the fiddle~ object's ability to describe the harmonic spectra of the user's voice. Our system examines the raw peak data output by fiddle~ and generates a numerical description of the vocal tone based on the weighting of tone amplitude at the fundamental frequency versus multiples of the fundamental frequency.
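The weighting described above — fundamental amplitude versus amplitude at its multiples — can be sketched as a simple ratio. The paper does not give the authors' exact formula, so the function below (its name, tolerance, and peak format) is an illustrative assumption about one way to summarize fiddle~-style peak lists:

```python
# Illustrative vocal-tone measure from (frequency, amplitude) peak data:
# total amplitude at harmonics relative to the fundamental's amplitude.
# The authors' exact expression is not given in the paper.

def harmonic_weighting(peaks, tolerance=0.03):
    """peaks: list of (frequency_hz, amplitude); the first peak is
    assumed to be the fundamental. Returns harmonic/fundamental ratio."""
    (f0, a0), rest = peaks[0], peaks[1:]
    harmonic_energy = 0.0
    for freq, amp in rest:
        ratio = freq / f0
        # count a peak as a harmonic if it lies near an integer multiple
        if abs(ratio - round(ratio)) < tolerance * round(ratio):
            harmonic_energy += amp
    return harmonic_energy / a0 if a0 > 0 else 0.0

# A voice rich in overtones scores higher than a near-sinusoidal one:
bright = harmonic_weighting([(220.0, 1.0), (440.0, 0.8), (660.0, 0.5)])
pure = harmonic_weighting([(220.0, 1.0), (440.0, 0.1)])
print(bright > pure)  # True
```

A measure of this shape rises for "bright" or pressed vocal timbres and falls for breathy, flute-like tones, which is the kind of distinction the character's perception layer can act on.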