<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:edm="http://www.europeana.eu/schemas/edm/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:ore="http://www.openarchives.org/ore/terms/" xmlns:svcs="http://rdfs.org/sioc/services#" xmlns:doap="http://usefulinc.com/ns/doap#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  
<ore:Aggregation rdf:about="https://phaidra.ustp.at/o:5474/#Aggregation">
  
<edm:aggregatedCHO rdf:resource="https://phaidra.ustp.at/o:5474"></edm:aggregatedCHO>

  
<edm:dataProvider>University of Applied Sciences St. Pölten</edm:dataProvider>

  
<edm:isShownAt rdf:resource="https://phaidra.ustp.at/o:5474"></edm:isShownAt>

  
<edm:isShownBy rdf:resource="https://phaidra.ustp.at/api/object/o:5474/get"></edm:isShownBy>

  
<edm:object rdf:resource="https://phaidra.ustp.at/api/object/o:5474/thumbnail"></edm:object>

  
<edm:rights rdf:resource="http://creativecommons.org/licenses/by/4.0/"></edm:rights>

  
</ore:Aggregation>

  
<edm:ProvidedCHO rdf:about="https://phaidra.ustp.at/o:5474">
  
<dc:title xml:lang="en">Towards a unified terminology for sonification and visualization</dc:title>

  
<dc:description xml:lang="en">Both sonification and visualization convey information about data by effectively using our human perceptual system, but they transform the data in different ways. Over the past 30 years, the sonification community has repeatedly called for a holistic perspective on data representation that includes audio-visual analysis. A design theory of audio-visual analysis would be a relevant step in this direction, and an indispensable foundation for this endeavor is a terminology describing the combined design space. To build a bridge between the domains, we adopt three established theoretical constructs from visualization theory for the field of sonification: the spatial substrate, the visual mark, and the visual channel. In our model, we choose time to be the temporal substrate of sonification. Auditory marks are then positioned in time, just as visual marks are positioned in space, and auditory channels are encoded into auditory marks to convey information. The proposed definitions allow visualization designs, sonification designs, and multi-modal designs to be discussed in a common terminology. While the identified terminology can support audio-visual analytics research, it also provides a new perspective on sonification theory itself.</dc:description>

  
<dc:identifier rdf:resource="https://phaidra.ustp.at/o:5474"></dc:identifier>

  
<dc:language>en</dc:language>

  
<edm:type>TEXT</edm:type>

  
  
<dc:type xml:lang="en">Text</dc:type>

  
<dc:type xml:lang="en">journal article</dc:type>

  
<dc:type xml:lang="de">Text</dc:type>

  
<dc:type xml:lang="de">Wissenschaftlicher Artikel</dc:type>

  
<dc:subject xml:lang="en">audio-visual analytics</dc:subject>

  
<dc:subject xml:lang="en">sonification</dc:subject>

  
<dc:subject xml:lang="en">sonification theory</dc:subject>

  
<dc:subject xml:lang="en">visualization theory</dc:subject>

  
<dc:subject xml:lang="en">audio-visual data analysis</dc:subject>

  
<dc:creator>Kajetan Enge</dc:creator>

  
<dc:creator>Alexander Rind</dc:creator>

  
<dc:creator>Michael Iber</dc:creator>

  
<dc:creator>Robert Höldrich</dc:creator>

  
<dc:creator>Wolfgang Aigner</dc:creator>

  

  
</edm:ProvidedCHO>

  
<edm:WebResource rdf:about="https://phaidra.ustp.at/api/object/o:5474/get"/>

  
</rdf:RDF>