Blog

How Cultural Institutions are Benefiting from Digitization of Photo Archives

“Today digital technology is pervasive. It is mandatory that museums, libraries, and archives join with educational institutions in embracing it.”

  • G. Wayne Clough, Author, Best of Both Worlds

Museums and cultural institutions are leaving no stone unturned to digitize history, and archiving photos forms an integral part of documenting it. Continuing from our previous post on how cultural institutions are leveraging photo archiving, this post details the benefits museums and cultural institutions stand to gain from digitizing their photo archives.

Easy Sharing and Distribution

Unlike physical prints, scanned photos can be shared easily with multiple users across multiple locations. Digital copies are easier to track electronically and are more cost-effective for researchers and curators, since they eliminate the need for physical reproduction and mailing.

Prepare for Disasters

Museums and cultural institutions are not free from the risk of losing valuable content. Natural calamities like earthquakes, floods, heavy rains, hurricanes, and tsunamis have destroyed museums and libraries over the centuries, resulting in the loss of valuable content. Digitization reduces the risk of losing valuable photographs outright.

Save Cost and Clutter

Maintaining physical photo prints requires storage space and incurs ongoing cost. Digitizing photos saves institutions the cost of keeping physical copies and makes the images easier to share and reproduce.

Source of Revenue

Owners of photos of rare events and occurrences can generate a revenue stream through royalties or licensing fees. Different models can be adopted, such as selling prints through the institution's own website or third-party portals, or exhibiting in galleries.

Tip for Successful Photo Digitization – Prioritizing Which Items to Digitize

Depending on its priorities and goals, every institution shortlists the photos that need to be digitized. Some questions that organizations should ask before selecting images for digitization are:

  1. Are the records unique?
  2. Do the photos appeal visually?
  3. Who will be the prospective consumer of the digitized images?
  4. Does the demand justify the cost that will be incurred to digitize the photos?
  5. Will digitization add any value to the picture?
  6. How will the institution control access to the digitized images? Will there be any restrictions, or will the images be openly accessible?
  7. Does the institution have the legal right to scan?
  8. What is the long-term preservation strategy of the photos being digitized?
  9. What is the metadata that will be required?

Once institutions have selected items that need to be digitized, here are some critical considerations while scanning photos.

  1. Once you have a flatbed scanner ready, set the scanner, Photoshop, and the printer to the same color space – CMYK or RGB.
  2. To capture the full range of gray tones (essential for black-and-white photos), choose the right resolution. Depending on the size of the print, scan at a DPI that yields roughly 3,000 – 4,000 pixels along the longer edge of the image.
  3. Choose the preservation format carefully. For the master file, the recommended format is TIFF.
  4. Save a JPEG copy for easy distribution among researchers.
  5. To avoid damage and file loss, keep the master copy separate from the distribution copy (a minimal sketch of this master/derivative workflow follows this list).
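
To make the master/derivative split described above concrete, here is a minimal Python sketch using the Pillow imaging library. The folder layout and file name are illustrative assumptions, not a prescribed structure.

```python
# A minimal sketch of the master/derivative workflow: keep the TIFF master
# untouched and write a JPEG derivative for distribution.
from pathlib import Path
from PIL import Image

MASTERS = Path("archive/masters")            # preservation copies (TIFF)
DERIVATIVES = Path("archive/derivatives")    # distribution copies (JPEG)
DERIVATIVES.mkdir(parents=True, exist_ok=True)

master_path = MASTERS / "photo_000123.tif"   # hypothetical file name
with Image.open(master_path) as master:
    derivative = master.convert("RGB")       # JPEG cannot store some TIFF modes
    derivative.save(DERIVATIVES / "photo_000123.jpg", "JPEG", quality=90)
```

Keeping the derivatives in a separate folder (or on separate storage) means researchers never handle the master files directly.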

Photo and image archivists should prioritize digitizing susceptible items such as color photos and cellulose nitrate film. The context of each photo should also be documented, and each item should carry metatags so it can be found easily when needed. To know about the top six mistakes to avoid while digitizing photos, read this blog.

Decoding the Importance of Metadata in Digitization and Preservation of Content

Introduction

Digital media has come a long way over the past decade. The shift from single screens to multiple screens and devices, and from subscription-based models to OTT service providers, is apparent. To keep pace with demand, broadcasters are broadening their distribution channels.

With audiences having a wide variety of choices for consuming video across platforms at their preferred time, broadcasters are leaving no stone unturned to digitize video content, even content dating back decades.

Broadcasters are now focused on aggregation and distribution of highly-targeted content that reaches narrow-interest audiences. As broadcasters develop and store digital content to use and reuse across devices and platforms, the value of good shareable content is increasing.

However, the problem lies elsewhere. An estimated 98% of archived media is not available for digital distribution.[1]

Why?

Migrating hours of media content from tape to digital storage is time-consuming. Though automated migration systems convert tapes to multiple digital formats simultaneously, tagging these files to make them searchable is a challenge.

Have you ever wondered how, when you Google something, some videos top the search results? With an average of 300 hours[2] of video content being uploaded to YouTube alone every minute, content producers and owners sweat over optimizing their content for search.

The solution

The key to ensuring that your content doesn’t get lost in the crowd is tagging it with relevant keywords. While search engines have evolved over the years, they still cannot read or watch your content the way a human can. They need hints (metadata) to understand the content and rank it. When filtering, a search engine weighs the title, the description, and the tags, in that order. If you optimize these three, half the battle is won.

In this post, we will explore:

  • What is metadata?
  • Types of metadata
  • Metadata Schema Models
  • The importance of metadata in content digitization
  • Optimizing metadata for content digitization

What is metadata?

Metadata refers to “data about data.”[3] It represents a detailed description of the underlying data within an object concerning its title, date & time of creation, format, length, language, year of reference, narration describing the object’s identity & purpose, etc.

In long-term digital archiving, metadata also records the preservation techniques applied to the digital objects in the archive. Metadata does the following:

  • Helps in easy identification, location, and retrieval of information by the end-users
  • Provides information about quality aspects or issues of the created object along with its access privileges/rights
  • Ensures smooth data management

Types of metadata

Depending on the nature of data and usability in a real-world scenario, metadata can be categorized as:

  • Descriptive: Helps to identify, locate, and retrieve information related to an object through indexing and navigation to related links. It includes elements such as title, creator, identity, and description
  • Structural: Defines the complexity of an object along with the role of individual data files, ordering of pages to form a chapter, file names, and their organization, etc.
  • Administrative: Helps to manage the resources in terms of its creation, methods, access rights, associated copyright, and the techniques required for preserving it
  • Rights: Defines access permissions and constraints over the stored objects and the information contained in them at different levels
  • Preservation: Records activities or methodology opted in the archive for preserving digital data.
  • Technical: Provides technical information embedded with the digital object (content files). It describes attributes of the digital image (not the analog source), the capture process, and any transformations applied, and helps ensure that the image will be rendered accurately.
  • Provenance: Records an object’s origin and the changes made to it over time, such as changes to resolution or format
  • Tracking: Keeps track of the data at different stages of the workflow (data automation processes, digital capturing, transformation, processing filters and toolsets, enhancement, quality control and management, and data archival and deliverables)

For long-term digital preservation, two types of metadata play a crucial role:

  1. Packaging Metadata

Defines three kinds of information packages:

  1. Submission Information Package (SIP) – Information delivered to the archive by the content provider
  2. Archival Information Package (AIP) – The content and related information stored in the archive
  3. Dissemination Information Package (DIP) – Information delivered to the end-user on request

  2. Preservation Metadata

Records the processes that support the preservation of digital data.

Metadata Schema Models

According to ISO 23081[4], a schema is “a logical plan showing the relationships between metadata elements, normally through establishing rules for the use and management of metadata specifically as regards the semantics, the syntax and the optionality (obligation level) of values.”

The amount of metadata that needs to be stored for an object depends on its functional use and significance. With a large body of metadata already in existence, and more being published regularly by different communities for different purposes, schema designers need to draw on existing standards and Semantic Web practice when designing a metadata schema.

For long-term preservation of data, a variety of metadata schema models have been developed, including the following (a minimal example of one such record appears after this list):

  • MARC: Machine Readable Cataloguing
  • MARCXML: XML version of MARC 21
  • METS: Metadata Encoding & Transmission Standard
  • MODS: Metadata Object Description Schema
  • DCMI: Dublin Core Metadata Initiative
  • CDWA: Categories for the Description of Works of Art
  • CRM: CIDOC Conceptual Reference Model
  • MPEG-7: Multimedia Content Description Interface (Moving Picture Experts Group)
  • EAD: Encoded Archival Description
  • RDF: Resource Description Framework
  • VRA CORE: Visual Resources Association
  • DDI: Data Documentation Initiative
  • MIX: Metadata for Images in XML Standard
  • IEEE LOM: Institute of Electrical and Electronics Engineers Standards Association for the description of “learning objects”
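
To make the idea of a schema concrete, here is a minimal sketch of a descriptive record using Dublin Core elements, built with Python's standard xml.etree.ElementTree module. The field values and identifier are illustrative only, not drawn from any collection discussed in this post.

```python
# Build a small Dublin Core descriptive record for a digitized photograph.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

record = ET.Element("record")
fields = {
    "title": "Main Street after the flood",      # illustrative values
    "creator": "Unknown photographer",
    "date": "1913-03-27",
    "format": "image/tiff",
    "identifier": "photo-archive/000123",
    "rights": "Copyright undetermined; contact the holding institution",
}
for name, value in fields.items():
    ET.SubElement(record, f"{{{DC_NS}}}{name}").text = value

# Serialize the record so it can be stored alongside the digitized image
print(ET.tostring(record, encoding="unicode"))
```

Richer schemas such as METS wrap descriptive elements like these together with structural and administrative metadata.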

The importance of metadata in content digitization

Metadata plays a key role in processing, managing, accessing, and preserving digital content – be it audio, video, or image collections. Metadata has the following key functions:

  • Search: To search for data associated with a file like Author, Date Published, Key Words, etc.
  • Distribute: To determine when and where the content will be distributed
  • Access: To determine delivery of targeted content based upon preset rules matching metadata values
  • Retain: To determine which records to archive

Optimizing metadata for content digitization

The importance of metadata lies in the fact that it makes content searchable – both online and offline. When filtering, search engines weigh the title, the description, and the tags, in that order. Some key points to remember while using metadata for content digitization are:

Optimize the title

Grab attention with a catchy, compelling title. To make a title search-engine (and mobile) friendly, limit it to 120 characters and include your top keywords. Think about what the audience would relate to, and make the title informative and relevant.

Optimize the description

Include your keywords and describe what the content is about. Put the most critical information in the first 22 words of the description, as search engines display only that portion in the results list before the ‘see more’ button is clicked.

Optimize the tags

A few things to keep in mind while tagging a digital asset (a small validation sketch follows this list) are:

  1. Assign keywords that cover the 5 W’s – what, when, who, why, and where – to make it a well-captured asset
  2. Avoid grammatical errors while assigning keywords
  3. Avoid ambiguous words or words with multiple meanings
  4. Be consistent with abbreviations and acronyms
  5. Use roughly 8 – 12 tags per asset
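
To make the title, description, and tag guidelines above concrete, here is a small Python sketch that checks an asset against them. The check_metadata function, its thresholds, and the sample asset are illustrative assumptions, not part of any particular digital asset management system.

```python
# Check an asset's metadata against the guidelines in this post:
# title length, description lead, tag count, and tag consistency.
def check_metadata(title: str, description: str, tags: list[str]) -> list[str]:
    problems = []
    if len(title) > 120:
        problems.append("Title exceeds 120 characters")
    if len(description.split()) > 22:
        # Not an error as such: just a reminder about front-loading key facts
        problems.append("Keep the most important information in the first 22 words")
    if not 8 <= len(tags) <= 12:
        problems.append("Aim for roughly 8-12 tags per asset")
    if len({t.lower() for t in tags}) != len(tags):
        problems.append("Duplicate or inconsistently cased tags found")
    return problems

# Illustrative asset, not a real record
issues = check_metadata(
    title="Restored 1920s newsreel: flood relief on Main Street",
    description="Digitized and restored newsreel footage covering local flood relief efforts.",
    tags=["newsreel", "1920s", "flood", "restoration", "archive",
          "black and white", "local history", "silent film"],
)
print(issues or "Metadata looks consistent with the guidelines")
```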

Conclusion

Metadata plays a crucial role in keeping track of content right from its inception through its processing and access. It provides a complete description of the purpose and functionality of the data, making it easier for end-users to locate and retrieve it. Therefore, it is crucial that all content has metadata embedded in it.

[1] https://www.recode.net/2014/4/8/11625358/modernizing-the-entertainment-industry-supply-chain-in-the-age-of

[2] https://merchdope.com/youtube-stats/

[3] https://www.techopedia.com/definition/1938/metadata

[4] https://committee.iso.org/sites/tc46sc11/home/projects/published/iso-23081-metadata-for-records.html

Six Steps For Restoring Your Old Films

Did you know that 50% of all full-length features produced before 1950 have vanished? Fewer than 20% of features from the 1920s survive in complete form, and survival rates for the 1910s are below 10%.[i]

While more than 90% of the world’s cinema produced before 1929 is lost forever and can no longer be restored, major players – from restoration agencies to film production houses – are trying to revive old classics digitally.

Film restoration is an archaeological expedition for curators. Apart from factors like dust, scratches, film grain, shrinkage, and color fade, heritage films are also at risk due to climate conditions, lack of training in film preservation, and, sometimes, unstable political conditions.

Film restoration is crucial for the preservation of films, especially those whose original elements have substantially deteriorated. The critical steps of restoring a film are as follows:

  1. Film identification: Film restoration is a costly and labor-intensive process, sometimes consuming more than 1,000 staff-hours to repair a film. Therefore, it is essential to identify the films that need to be restored.
  2. Film treatment and repair: Curators clean the films using chemicals and cleaning machines. They then use splicing tape, film cement, or ultrasonic splicers to repair perforations and tears in a film before running it through projectors, printers, and other sprocket-driven film equipment.
  3. Digitization/ Scanning: Curators scan each frame into a digital file before proceeding with restoration. The back-up copy replicates the video and audio content of the film and ensures the copy can be used in the future to create subsequent viewing copies.
  4. Film comparison: Before proceeding with the restoration, curators compare all the known surviving source materials to ensure the chosen version is the best available version for restoration.
  5. Digital restoration: Now a widely used approach, films are restored using digital or hybrid techniques, and the output can be delivered on film or in digital form. Digital restoration typically incorporates the following (see the sketch after this list):
    • Comparing each frame to its adjacent frames
    • Fixing the frame alignment
    • Restoring areas blocked by dirt and dust using parts of the image from other frames
    • Restoring scratches using parts of the image from other frames
    • Reducing film grain noise
    • Restoring sound
    • Correcting flickering, lighting, and color changes, even minimal ones, from one frame to another caused by the aging of the film
  6. Digital asset management: It is essential to create a set of database records with metadata and other relevant information that allows end users to identify, locate, and retrieve a film from the archive.
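
As an illustration of the dust-and-dirt step above, here is a minimal Python/OpenCV sketch that cleans a frame using its two neighbours. The file names are hypothetical, and real restoration pipelines use motion-compensated techniques rather than a plain temporal median, which would smear fast motion.

```python
# Suppress dust and dirt in one frame by comparing it with its neighbours.
import cv2
import numpy as np

prev_f = cv2.imread("scans/frame_0100.png")   # hypothetical scanned frames
curr_f = cv2.imread("scans/frame_0101.png")
next_f = cv2.imread("scans/frame_0102.png")

# A per-pixel median across three consecutive frames removes specks that
# appear in only one frame (dust, dirt) while keeping stable image content.
stack = np.stack([prev_f, curr_f, next_f]).astype(np.float32)
cleaned = np.median(stack, axis=0).astype(np.uint8)

cv2.imwrite("restored/frame_0101.png", cleaned)
```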

 

From documentaries to fictional narratives, newsreels, industrial films, home movies, political ads, and travelogues, films are witnesses of the past. By restoring these works, we can illuminate our heritage with the power and immediacy unique to film.

 

[i] http://besser.tsoa.nyu.edu/howard/Talks/cineteca-mexicana.pdf

Key considerations for digitizing hospital records

With the ever-growing population leading to an increasing number of patients every day, hospital staff and doctors find it difficult to maintain medical records on paper. The traditional system of keeping records is not only cumbersome but also has other challenges like:

  • Slow: With information being exchanged mainly through calls, fax, or mail, the process of transferring information is prolonged, leading to loss of time – and, for critical patients, sometimes loss of life.
  • Lack of unified view: Patient information is scattered across doctors, labs, pharmacies, and hospital departments, making it difficult to access. As a result, doctors and hospitals often miss relevant information such as drug allergies.
  • Storage: With a paper-based system, storing all the data is a challenge in terms of both space and cost. Moreover, patients need to carry physical copies of reports, prescriptions, and their medical history, which is not feasible in an emergency.

Thanks to technological advancement, hospitals and doctors are moving to electronic records, which both doctors and patients can access on any device, anytime.

Medical record management involves maintaining all records of a patient throughout their lifecycle from creation, receipt, maintenance, and use to disposal. Medical records include a patient’s history, clinical findings, diagnostic test results, pre- and postoperative care, patient progress, and medications.

While the benefits of maintaining medical records electronically are many, we have listed some of them below:

  • Access and storage: Storing paper documents is cumbersome, both in terms of space and sorting. Electronic medical records not only save space but also make sorting and searching easy with tags and meta tags.
  • Cost saving: Setting up the system is costly and resource-intensive. Once set up, however, hospitals and health professionals need less support to manage records, less security to protect them, and less space to store them – contributing to cost savings.
  • Security: Electronic documents are backed up onto multiple systems, so the loss of a document is not irreversible, as it is with paper. Moreover, the files are encrypted, and access controls can be set to prevent unauthorized access, making the records more secure.

While it is convenient to maintain health records electronically, doctors and hospitals should consider the following[1] while transitioning from paper-based records to electronic records:

  • Which historical patient information should be available for patient visits during and after the transition?
  • What are the best methods of converting this information to the EHR?
  • What is the best way to ensure that the converted data and information is of sufficient quality?
  • How long should the paper record be available after the conversion?
  • How long do paper records need to be kept after the transition to the EHR?
  • What is the role of printing and should it be allowed during the transition?

How to convert the data?

While there are multiple methods to convert data, cost and patient safety must be considered while choosing the mode of data entry. For example, drug allergies should be entered manually and not scanned, as scanned documents cannot be cross-referenced.

Depending on the cost, timeframe, type of data, and availability of resources, hospitals and clinics can resort to the following methods to convert the data:

Direct data entry: Items such as allergies, medications, and symptoms are loaded into predetermined data fields by staff well-versed in medical terminology, keeping errors to a minimum.

Backloading from other systems: Depending on the patient population, the historical information already available electronically, and the final version of the patient information, transcribed notes can be backloaded into the system.

Document imaging: Although a labor-intensive and expensive process, document imaging is necessary for reports and scans.

EMR in India

While most developed countries have already adopted EMR, challenges remain for a country with a population the size of India’s. Most corporate hospitals have started maintaining EMRs, but records are rarely exchanged between hospitals. Considering that much of the population lives in rural areas and is not technologically savvy, India needs a comprehensive EMR system that is easy to learn and user-friendly.

[1] http://library.ahima.org/doc?oid=103171#.XC9M71wzbIW

Five factors that damage audio tapes

Cassette tapes were first mass-produced in the early 1960s and became popular in the 1980s. Long before DVDs and cloud storage, audio tapes and reels were used to record information. Magnetic tape has a lifespan of roughly 10 – 30 years and has been used to record and store sound, numeric and textual information, and moving and still images. While magnetic media expands the kinds of artifacts we can capture and store, its transience and degradability have long been a concern for archivists and librarians.

To understand the reason for the degradation of audio tapes/reels, we need to delve into the components that form these tapes. Tapes have three parts – a magnetic layer, binder, and backing – all of which are potential sources of failure.

  1. The magnetic layer has a magnetic pigment suspended within a polymer binder.
  2. The binder holds the magnetic particles together and helps in recording and storing the magnetic signals written to it.
  3. The backing film supports the magnetic recording layer, which is very thin and cannot be a stand-alone layer.

All these components are susceptible to damage in the following ways:

Instabilities in the magnetic particle (top layer): If there is any change in the magnetic properties of the pigment that stores the recorded information, the recorded signals are irretrievable. The magnetic particle can become unstable due to demagnetization by an external factor like a hand-held metal detector, or suffer normal wear and tear.

Loss of lubricant in the binder: Lubricants reduce the friction of the magnetic top coat of the tape, reducing tape wear. Over time, the level of lubricant decreases through normal wear and tear, frequent use, degradation, and evaporation.

Substrate deformation (backing film): Polyester that is used as a substrate backing is chemically stable. However, excessive tape pack stresses, aging, and poor wind quality can cause deformation of the polyester in the substrate, thereby distorting the tapes.

Various factors result in the damage of audio tapes and reels. We have listed five of them below:

  1. Temperature and humidity: High temperatures and humidity can weaken the magnetic signal and deteriorate the binder or backing of the tape, resulting in loss of readable data. Ideally, tapes should be stored between 0°C and 23°C at less than 70% relative humidity to prevent fungal growth and degradation.
  2. Frequent access: Frequent access reduces the life expectancy of tapes through wear and tear. The more a tape is handled, the more it is contaminated with fingerprints and debris, which reduces its life considerably.
  3. Exposure to strong magnetic fields: Strong magnetic fields – from luggage screeners in airports, X-ray scanners, and metal detectors, both hand-held and walk-through – can erase information from audio tapes and reels, which use magnetic particles to store data.
  4. Dust and debris: Dust, tape debris, and smoke particles can affect the tape when it is being played, resulting in loss of signal, and subsequently damaging the tape.
  5. Corrosive gases: Magnetic tapes are susceptible to airborne sulfides, ozone, and nitrous oxides. Bare metal particle (MP) and metal evaporated (ME) tapes, which are contained in cassettes, are affected by corrosive gases.

While storage options are aplenty now, audio tapes and reels still hold sentimental and historical value for librarians, archivists, and older generations. While audio reels will degrade with time, some ways in which the decay can be contained are:

  1. Using and storing magnetic tape reels and cassettes in a clean environment.
  2. Avoiding contamination of the tapes by dirt, dust, fingerprints, food, cigarette smoke and ash, and airborne pollutants.
  3. Keeping the tapes away from strong sunlight and water.
  4. Not storing tapes near electronic or magnetic fields.
  5. Ensuring the reels are not laid flat for long periods while storing.

Having said that, it is best to create a backup copy of the information in modern formats to ensure no information is lost.

Pros and Cons of Linear Tape Open (LTO) for Long-term Content Archiving

Originally developed in the late 1990s, LTO (Linear Tape-Open) is a magnetic tape data storage technology that, for almost two decades, has offered extensive storage for a variety of applications, including long-term archiving, data backup, high-capacity data transfer, and offline storage.

LTO technology has improved substantially across its successive generations (1 – 8), adding features such as write-once, read-many (WORM) support; data encryption; and partitioning to enable the Linear Tape File System (LTFS), alongside gains in storage capacity, data transfer rate (MBps), digital encoding methods, and compression techniques.

An overview of the LTO generations is depicted hereunder:

| LTO Type | Year of Introduction | Generation | Native Capacity | Compressed Capacity | Compression Ratio | Native Data Transfer Rate | Compressed Data Transfer Rate |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LTO-1 | 2000 | 1 | 100 GB | up to 200 GB | 2:1 | 20 MBps | 40 MBps |
| LTO-2 | 2003 | 2 | 200 GB | 400 GB | 2:1 | 40 MBps | 80 MBps |
| LTO-3 | Late 2004 | 3 | 400 GB | 800 GB | 2:1 | 80 MBps | 160 MBps |
| LTO-4 | 2007 | 4 | 800 GB | 1.6 TB | 2:1 | 120 MBps | 240 MBps |
| LTO-5 | 2010 | 5 | 1.5 TB | 3 TB | 2:1 | 140 MBps | 280 MBps |
| LTO-6 | 2012 | 6 | 2.5 TB | 6.25 TB | 2.5:1 | 160 MBps | 400 MBps |
| LTO-7 | 2015 | 7 | 6 TB | 15 TB | 2.5:1 | 300 MBps | 750 MBps |
| LTO-8 | 2017 | 8 | 12 TB | 30 TB | 2.5:1 | 360 MBps | 900 MBps |

Pros & Cons of data storage on LTO

Pros:

Storage Capacity & Costs

For industries dealing with huge volumes of data, archiving on LTO is cheaper and more effective than storing it on internal hard drives. LTO data archiving has grown rapidly in sectors such as media, entertainment, data analytics, and science, where data flows continuously throughout operations. With the latest generation, LTO-8 (shown above), a single tape costing about $100 can store 12 TB of uncompressed data at a 360 MBps transfer rate, or up to 30 TB of compressed data at up to 900 MBps.
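
As a quick sanity check on these figures, here is a small sketch that works out the implied cost per terabyte and how long it would take to fill one LTO-8 cartridge at the native rate. The ~$100 tape price is the approximate figure quoted above, not a vendor quote.

```python
# Back-of-the-envelope arithmetic for the LTO-8 figures quoted in this post.
native_capacity_tb = 12        # native (uncompressed) capacity, TB
tape_price_usd = 100           # approximate cartridge price
native_rate_mbps = 360         # native transfer rate, MB/s

cost_per_tb = tape_price_usd / native_capacity_tb
fill_time_hours = (native_capacity_tb * 1_000_000) / native_rate_mbps / 3600

print(f"Cost per TB: ~${cost_per_tb:.2f}")                 # ~$8.33 per TB
print(f"Time to fill one tape: ~{fill_time_hours:.1f} h")  # ~9.3 hours
```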

Life Span/Durability

LTO cartridges offer an extensive lifespan, averaging around 30 years, along with reliable backup and recovery throughout their life cycle.

Data Mobility

Transferring voluminous data over networks is expensive and time-consuming, and link or interoperability failures can lead to data corruption. There is also the risk of unauthorized access over the internet, which threatens the confidentiality of the data.

LTO, on the other hand, provides an easy and practical means of exchanging data physically, by moving tapes from one location to another.

Technology Upgrade

LTO technology has grown remarkably over the years, with new releases every two to three years bringing expanded storage capacity, higher data transfer rates, and improved data compression and encryption.

The LTO Program has laid out a product roadmap with releases up to LTO-12, delivering incremental gains in storage capacity and performance.

Disaster Recovery

Because backup data stored on LTO is preserved offline, it is safe from viruses and malware, and the whole data set can be restored as needed.

Cons:

Operational costs

The overall operational cost of tape-based archiving is comparatively high: the LTO drives used to record data onto magnetic tape range from $2,000 to $3,500, and enterprise versions can cost more.

Keeping up with the technology

LTO-1 was introduced in 2000 and LTO-6 twelve years later, so a new generation appears roughly every second year. Archives typically migrate their tapes every second generation, since drives only read and write two or three generations of tapes. If we record on LTO-6 and leave the tape on a shelf for 60 years, there is no guarantee a reading device will still be available, and with very high probability most of the data will be gone.

Tapes are not random access like hard drives

An LTO drive records data on magnetic tape linearly, so tapes support only sequential access. This adversely affects the speed of storing and, especially, retrieving data compared with random-access media.

Because of these limitations of linear technology, inserting new data or modifying existing data in the middle of a tape erases everything beyond the point of insertion or modification. To avoid deleting existing data, new data must be appended from the last written point. This sometimes leads to data duplication and prevents optimal use of the tape's storage space.
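
As a rough illustration of this append-only behaviour, here is a toy model in Python; it is only a sketch of the constraint described above, not of how LTFS actually manages blocks on tape.

```python
# Toy model: writing anywhere other than the end of the tape discards
# everything recorded beyond that point.
class ToyTape:
    def __init__(self):
        self.blocks = []                 # blocks in the order they were written

    def append(self, data):
        self.blocks.append(data)         # safe: adds after the last written block

    def write_at(self, index, data):
        # Overwriting mid-tape truncates all later blocks, as on real tape
        self.blocks = self.blocks[:index] + [data]

tape = ToyTape()
tape.append("project-A")
tape.append("project-B")
tape.write_at(0, "project-A-v2")         # "project-B" is now gone
print(tape.blocks)                       # ['project-A-v2']
```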

Conclusion:

Whether LTO and LTFS are the optimal storage choice depends on the amount of data to be archived and how frequently end users need to access it. There is no doubt that LTO is an ideal medium for the offline preservation and protection of data from completed projects, and its features make it well suited to long-term data retention and content archiving.

LTO tape backup is consistent, durable, and cost-efficient for long-term data archiving, especially when supported by an offsite tape-vaulting service.

LTO tapes are a better option for archiving huge amounts of data over the long term, especially for industries that produce substantial data throughout their lifecycle, such as media, entertainment, surveying, medical records, legal records, and libraries.

In contrast, an in-house disk system – or even cloud storage – works efficiently for data that needs to be accessed frequently and with low latency. The ability to randomly access and modify existing data on disk also minimizes data duplication.

The Future of the OB Truck

Are OB trucks too expensive to survive in a world that is moving toward digital news delivery and mobile, small-screen news consumption? As broadcasters shift to 4K UHD live broadcasting and adopt IP to experiment with more immersive consumer-facing formats, does this spell the death of the OB truck?

With the live production scenario evolving every day, many believe IP production and broadcast will phase out OB trucks. Adam Cox[1] lists the cost of cabling, equipment, and production, as well as scalability and the emergence of 4K cameras, as reasons why IP will overtake OB trucks.

But are these challenges really so decisive? Let’s explore.

Outside broadcasting means broadcasting live from an event using a makeshift studio: cameras linked to a van that transmits the signals back to the network center. Typically, broadcasters station huge OB trucks on-site – often more than 10 meters long and housing production, engineering, and sound units – to cover events live from the place of occurrence.

However, with broadcasters experimenting with formats and tools, and with live reporting on social media platforms like Facebook and Instagram gaining popularity, will OB trucks cease to exist? While we look for the answer, let us explore the challenges OB trucks face today and the alternatives:

The changing face of content consumption

As social media, IPTV, and OTT have evolved into primary means of content consumption, the definition of experience has changed. A better experience no longer means a bigger TV on the wall, but connected devices that let users watch their preferred content wherever and whenever they choose.

This new connected world has blurred the need for a separate facility and technology for each form of content – be it a studio, an OB truck, or a broadcast center. As mobile cameras replace high-end cameras and live streaming supplements traditional broadcast, the technology and facilities that once defined the boundaries of SDI are blurring.

Result? On-the-go/live content creation is no longer solely dependent on having OB trucks on site.

Remote production

The way broadcasters produce and consume content has changed drastically. Multiple onsite cameras capture events and feed into a central hub of assets, from which stakeholders can pick and choose content to suit their requirements. Content is a shared asset and no longer has a definite start and end.

While OB trucks remain at the front line of covering live events, connectivity-focused tools promote collaboration between broadcasters. Hence, as remote production becomes more popular, mobile units (or OB trucks) will be broadcaster-specific rather than event-specific.

New content formats

With the popularity of IP TV, content providers are now focusing on providing an immersive experience. As VR content and 360-degree videos gain popularity, OB trucks need to evolve to facilitate the production of these formats.

Moreover, with content being consumed across platforms, OB trucks (or any production unit) need to cater to demand for content in varied formats. As production units adopt artificial intelligence for post-production, OB trucks need to evolve from mere technical production facilities into producers of OTT- and IPTV-native content.

Will mobile journalism sunset OB trucks?

Legacy broadcasters like BBC and CNN are experimenting with online video news to reach younger audiences in this changing environment. Digital players like NowThis and BuzzFeed are focusing on building an audience for platforms like Facebook and YouTube. Recently, NDTV 24X7 shot their stories on a Samsung Galaxy S8 smartphone.

Newscasters have started reporting live from the venue using applications like Skype, Google Hangouts, and Google Duo. Mobile phone cameras are replacing DSLRs, capturing high-quality images and videos that can be telecast directly.

The future

While many might argue that mobile journalism essentially does the OB truck’s job – reporting live from ground zero – OB trucks have much more to offer than mobile ever will. With advanced facilities for on-the-go editing, switching between multiple cameras, and advanced graphics, among others, OB trucks can deliver a high-quality live broadcast.

OB trucks are here to stay. However, to be future-ready, broadcasters need to ramp up the technology to support 4K and 8K broadcasts across multiple platforms. With the Internet of Things, OB trucks need to become more integrated and advanced to deliver a world-class experience to viewers, irrespective of the platform they choose to watch on.

[1] http://hometownnetworks.tv/future-of-outside-broadcasting-ob-vans/

Six Ways Broadcasters & Media Organizations are Leveraging Big Data Analytics

Have you ever wondered what led Netflix to invest around $50 million in each season of ‘House of Cards’? Or how Chennai Express broke the box office collection record in 2013? The answer in each case is advanced data analytics.

The media industry is increasingly leveraging analytics to predict audience sentiment, woo new audiences, and retain existing ones. Be it OTT media service providers like Netflix or Amazon Prime, or the film and music industries, marketers are using advanced analytics and machine learning to generate a pull for their content.

As technology, social media, and analytics make it possible for the media industry to leverage the power of the Internet, we look at six ways in which the media and entertainment industry is using analytics:

Generating targeted content

Data-driven decisions are the future of the media and entertainment industry. With huge amounts of data available for drawing inferences, predicting customer preferences, and deciding what will work, the industry no longer depends on intuition to make a series or a film succeed. For example, Netflix claimed that when it invested in House of Cards, it already knew the series would be a hit – thanks to viewership data covering viewers’ habits across many millions of show views.

Optimizing scheduling of content

Big data gives media houses the power to collect data from diverse sources and understand customer preferences – the type of content, the time, and the device used. Using advanced analytics, they can then optimize the scheduling of content. For example, broadcasters can stream popular movies on a local holiday, or schedule more home-oriented content during afternoons.

Optimized scheduling is not limited to general analysis; it can also draw on more detailed predictions based on browsing history, weather conditions, or time of day.

Relevant recommendations

Considering the massive amount of data that the media and entertainment companies generate daily, analyzing it to gain insights into the popular genres or preferred time is not an easy task. However, if appropriately interpreted with a good recommendation engine, the data can increase user engagement manifold by providing an effective recommendation.

Media and entertainment companies are increasingly using machine learning and advanced analytics tools to analyze viewership data in real-time and provide relevant recommendations to the audience.

Targeted advertising

Thanks to big data, analysts have a better understanding of consumption behavior across multiple platforms. With advanced segmentation and complete customer views, companies can micro-segment customers to personalize ads. Targeted ads ensure that the right people see the right ads, increasing click-through rates and thereby conversion rates and ROI.

Retaining and wooing viewers

Data gives insight into why viewers subscribe or unsubscribe to a channel. Media and entertainment companies can use viewership data to devise the best product and promotional strategies to attract new viewers and prevent churn among existing ones.

Finding newer sources of revenue

Considering the ever-changing equation of customer preference and new technology, it is essential for media and entertainment companies to explore new sources of income continually. Advanced analytics can help companies do that – identify additional sources of revenue apart from advertising or partnerships. For example, companies can create a proprietary platform using their exclusive data and earn revenue from advanced advertising.

While the media and broadcast industry has always relied primarily on data in the form of ratings, viewership, TRPs, and the like to measure success, advanced analytics has taken it a step further. By analyzing real-time data from multiple sources, predictive analytics now helps companies not only measure success but also plan for the future.

How Cultural Institutions are Leveraging Photo Archiving

Museums and cultural institutions play a valuable role in preserving the rich cultural heritage of our planet. By recording the history of different eras and communities, such institutions help us understand our past, deepening our knowledge of and respect for various cultures and traditions.

However, with time, the ways of accessing this history are changing. G. Wayne Clough, the author of Best of Both Worlds, says, “Today digital technology is pervasive. It is mandatory that museums, libraries, and archives join with educational institutions in embracing it.”

To keep up with this trend, photo archiving has been a prime focus of many cultural institutions. Some forerunners in this space are:

Pharos

Pharos, the “International Consortium of Photo Archives” – a joint effort of 14 institutions including the Getty, the Frick, the National Gallery of Art, the Yale Center for British Art, Rome’s Bibliotheca Hertziana, and the Courtauld Institute, among others – will host 25 million images: 17 million of artworks and 8 million of supplemental material. The Consortium aims to have 7 million images online by 2020.

Primarily aimed at scholars, Pharos uploads a work’s provenance, attribution, exhibition, conservation, and bibliographic histories. The Consortium currently has more than 100,000 images and 60,000 artworks of early Christian art from the National Gallery, classical and Byzantine art and mosaics from the Frick, statuary from the Bibliotheca Hertziana, and photographs of Roman pottery among other collectibles.

Smithsonian Design Museum

Cooper Hewitt, popularly known as the Smithsonian Design Museum, has embarked on an ambitious digitization project and has digitized more than 92 percent of its collection, which spans 3,000 years of design.

Durham Museum

The photo archive of the Durham Museum in Nebraska documents the history of Omaha in more than 1 million images from the 1860s. Dedicated to the long-term storage of photographs to preserve a part of the past, the photos document moments like Presidents on parade, streetcars, storefronts, and images from the early days of the city.

Oslo City Museum

The Oslo City Museum, with over 2 million objects, has started archiving photos to preserve the lifestyle, history, and development of the city in time. More than 100,000 photos have already been digitized in the museum’s system.

Norwegian Labour Movement Archives and Library

Four specialist groups are working together to organize the collection of the Norwegian Labour Movement Archives and Library, which comprises 1,500,000 items about Oslo’s history in general and the history of the labour movement in particular.

Google

Google has a similar project – the Google Art Project – which lets users virtually tour 17 of the world’s major institutions, including the Uffizi, the Met in New York, and the Tate, among others.

Benefits of photo archiving

While the benefits of archiving history are many, here is a list of the four prominent benefits:

  1. Reachability: With photo archiving, learning about history and culture is no longer restricted to museum booklets or guided tours. With web-based virtual walk-throughs and videos, museums and cultural institutions can reach a broader audience.
  2. Multiple revenue sources: Photo archiving has opened new revenue sources for cultural institutions. Many museums have websites selling online tickets, replicas of artifacts, historical DVDs, and 3D immersive trips to let the audience experience history from the comfort of home.
  3. Long-term preservation of cultural heritage: Physical copies of photos and artifacts are subject to wear-and-tear and natural calamities. Digitization has made preservation of history easier and more accessible.
  4. Ease of research: Photo archiving has made researching an era or finding the right image for a project easier. For example, Pharos, the consortium of photo archives, has made millions of photos accessible to artists and researchers in a click, saving time and energy.

With digitization, consumers have easy access to media and information through connected devices, making sharing faster and more accessible. Hence, more cultural institutions are trying to expand their horizons to reach new audiences and digitize their collections for long-term preservation.