Technology

HOW CULTURAL INSTITUTIONS ARE BENEFITING FROM DIGITIZATION OF PHOTO ARCHIVES

“Today digital technology is pervasive. It is mandatory that museums, libraries, and archives join with educational institutions in embracing it.”

  • Wayne Clough, Author, Best of Both Worlds

Museums and cultural institutions are leaving no stone unturned to digitize history, and archiving photos forms an integral part of documenting it. Continuing our previous post on how cultural institutions are leveraging photo archiving, in this post we detail the benefits museums and cultural institutions gain from it.

Easy Sharing and Distribution

Unlike physical copies, scanned photos can be easily shared across multiple locations with multiple users. Digital files are easier to track, and sharing them is cost-effective for researchers and curators because it eliminates the need for physical reproduction and mailing.

Prepare for Disasters

Museums and cultural institutions are not free from the risk of losing valuable content. Natural calamities like earthquakes, floods, heavy rains, or hurricanes and tsunamis have destroyed museums and libraries over the centuries, resulting in the loss of valuable content. Digitization will curb the risk of loss of valuable photographs.

Save Cost and Clutter

Maintaining physical photo prints requires storage space and involves cost. Digitizing photos saves institutions the cost of keeping physical copies and makes the images easier to share and reproduce.

Source of Revenue

Owners of photos of rare events and occurrences can generate a revenue stream through royalties or licensing fees. Different models can be adopted, such as selling prints through your own website or third-party portals, or exhibiting in galleries.

Tip for Successful Photo Digitization – Prioritizing Which Items to Digitize

Depending on its priorities and goals, every institution shortlists the photos that need to be digitized. Some questions organizations should ask before selecting images for digitization are:

  1. Are the records unique?
  2. Do the photos appeal visually?
  3. Who will be the prospective consumer of the digitized images?
  4. Does the demand justify the cost that will be incurred to digitize the photos?
  5. Will digitization add any value to the picture?
  6. How will the institution control access to the digitized images? Will there be any restrictions, or can they be accessed openly?
  7. Does the institution have the legal right to scan?
  8. What is the long-term preservation strategy of the photos being digitized?
  9. What is the metadata that will be required?

Once institutions have selected items that need to be digitized, here are some critical considerations while scanning photos.

  1. Once you have a flatbed scanner ready, set the scanner, Photoshop, and the printer to the same color space – CMYK or RGB.
  2. To capture the many shades of gray (essential especially for black-and-white photos), choose the right resolution. Depending on the size of the picture, scan at a DPI that yields around 3,000 – 4,000 pixels along the length of the image.
  3. Choose the preservation format carefully. For the master file, the recommended format is TIFF.
  4. Save a JPEG copy for easy distribution among researchers.
  5. To avoid damage and file loss, keep the Master copy separate from the distributed copy.
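The resolution arithmetic in point 2 can be sketched as a quick calculation (the 4,000-pixel target follows the guideline above; the print sizes are illustrative):

```python
def scan_dpi(longest_side_inches: float, target_pixels: int = 4000) -> int:
    """Scanner resolution (DPI) needed so the longest side of a print
    yields roughly `target_pixels` pixels, per the guideline above."""
    if longest_side_inches <= 0:
        raise ValueError("print size must be positive")
    return round(target_pixels / longest_side_inches)

# A 5 x 7 inch print needs roughly 571 DPI to reach 4,000 pixels on its
# long side; a large 16 x 20 inch print needs only about 200 DPI.
```

Smaller originals therefore need higher scan resolutions to reach the same pixel count.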

Photo/image archivists should prioritize digitizing at-risk items such as color photos and cellulose nitrate films. The context of each photo should also be documented, and each item needs metatags to make it easily accessible when needed. To learn about the top six mistakes to avoid while digitizing photos, read this blog.

Decoding the Importance of Metadata in Digitization and Preservation of Content

Introduction

Digital media has come a long way over the past decade. The shift from single-screen to multi-screen and multi-device, and from the subscription-based model to OTT service providers, is apparent. Keeping in line with the demand, broadcasters are also broadening their distribution channels.

With audiences having a wide variety of choices to consume video across platforms at their preferred time, broadcasters are leaving no stone unturned to digitize video content – even content dating back decades.

Broadcasters are now focused on aggregation and distribution of highly-targeted content that reaches narrow-interest audiences. As broadcasters develop and store digital content to use and reuse across devices and platforms, the value of good shareable content is increasing.

However, the problem lies elsewhere. An estimated 98% of archived media is not available for digital distribution.[1]

Why?

Migrating hours of media content from tape to digital storage is time-consuming. Though automated migration systems convert tapes to multiple digital formats simultaneously, tagging these files to make them searchable is a challenge.

Have you ever wondered how – when you Google – some videos top the search results? With an average of 300 hours[2] of video content being uploaded to YouTube alone every minute, content producers and owners sweat over making their content optimized for search results.

The solution

The key to ensuring that your content doesn’t get lost in the crowd is tagging it with relevant keywords. While search engines have evolved over the years, they still cannot read or watch your content the way a human can. They need a hint – metadata – to understand the content and rank it. When filtering, search engines weigh fields in this order: title, description, and tags. Optimize these three and half the battle is won.

In this paper, we will explore:

  • What is metadata?
  • Types of metadata
  • Metadata Schema Models
  • The importance of metadata in content digitization
  • Optimizing metadata for content digitization

What is metadata?

Metadata refers to “data about data.”[3] It represents a detailed description of the underlying data within an object concerning its title, date & time of creation, format, length, language, year of reference, narration describing the object’s identity & purpose, etc.

In long-term digital archiving, metadata also records the preservation techniques applied to the digital objects in the archive. Metadata does the following:

  • Helps in easy identification, location, and retrieval of information by the end-users
  • Provides information about quality aspects or issues of the created object along with its access privileges/rights
  • Ensures smooth data management
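As a minimal sketch of the identification role above, a descriptive record can be modeled as a plain dictionary and checked for completeness before ingest (the field names here are illustrative, not a formal standard):

```python
REQUIRED_FIELDS = {"title", "creator", "date_created", "format", "rights"}

def missing_fields(record: dict) -> set:
    """Return the required metadata fields that are absent or empty."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

record = {
    "title": "Harbor at dawn",
    "creator": "Unknown photographer",
    "date_created": "1923",
    "format": "image/tiff",
    # "rights" not yet cleared, so the record is flagged as incomplete
}
```

Here `missing_fields(record)` flags the uncleared `rights` field, so the item can be held back until its access privileges are documented.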

Types of metadata

Depending on the nature of data and usability in a real-world scenario, metadata can be categorized as:

  • Descriptive: Helps to identify, locate, and retrieve information related to an object through indexing and navigation to related links. It includes elements such as title, creator, identity, and description
  • Structural: Defines the complexity of an object along with the role of individual data files, ordering of pages to form a chapter, file names, and their organization, etc.
  • Administrative: Helps to manage the resources in terms of its creation, methods, access rights, associated copyright, and the techniques required for preserving it
  • Rights: Defines access permissions and constraint over the stored objects and information contained in them at different levels
  • Preservation: Records the activities or methodology adopted in the archive for preserving digital data
  • Technical: Provides technical information embedded with the digital object (content files). It describes attributes of the digital image (not its analog source), documents the capture process and any transformations applied, and helps ensure that the image is rendered accurately
  • Provenance: Records an object’s origin and the changes made to it, such as to its resolution or format
  • Tracking: Keeps track of the data at different stages of the workflow (data automation processes, digital capturing, transformation, processing filters and toolsets, enhancement, quality control and management, and data archival and deliverables)

For long-term digital preservation, two types of metadata play a crucial role:

  1. Packaging Metadata

Defines three kinds of information packages:

  1. Submission Information Package (SIP) – Information delivered to the archive by the content provider
  2. Archival Information Package (AIP) – Content and related information stored in the archive
  3. Dissemination Information Package (DIP) – Information delivered to the user on request

  2. Preservation Metadata

Records the processes that support the preservation of digital data

Metadata Schema Models

According to ISO 23081[4], a schema is “a logical plan showing the relationships between metadata elements, normally through establishing rules for the use and management of metadata specifically as regards the semantics, the syntax and the optionality (obligation level) of values.”

The amount of metadata that needs to be stored for an object depends on its functional usage and significance. With a large body of metadata already in existence, and more being published regularly for different purposes by different communities, schema designers need a solid grasp of Semantic Web principles when designing a metadata schema.

For long-term preservation of data, a variety of metadata schema models have been developed, including the following:

  • MARC: Machine Readable Cataloguing
  • MARCXML: XML version of MARC 21
  • METS: Metadata Encoding & Transmission Standard
  • MODS: Metadata Object Description Schema
  • DCMI: Dublin Core Metadata Initiative
  • CDWA: Categories for the Description of Works of Art
  • CRM: CIDOC Conceptual Reference Model
  • MPEG-7: Multimedia Content Description Interface, from the Moving Picture Experts Group
  • EAD: Encoded Archival Description
  • RDF: Resource Description Framework
  • VRA CORE: Visual Resources Association
  • DDI: Data Documentation Initiative
  • MIX: Metadata for Images in XML Standard
  • IEEE LOM: Institute of Electrical and Electronics Engineers Standards Association for the description of “learning objects”
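Of the schemas listed, Dublin Core is the simplest to illustrate. A minimal sketch using Python's standard XML library (the element selection is illustrative; DCMI defines fifteen core elements):

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"  # Dublin Core element namespace
ET.register_namespace("dc", DC)

def dublin_core_record(title: str, creator: str, date: str, fmt: str):
    """Build a minimal Dublin Core description as an XML element tree."""
    root = ET.Element("record")
    for tag, value in [("title", title), ("creator", creator),
                       ("date", date), ("format", fmt)]:
        ET.SubElement(root, f"{{{DC}}}{tag}").text = value
    return root

rec = dublin_core_record("Harbor at dawn", "Unknown", "1923", "image/tiff")
xml_text = ET.tostring(rec, encoding="unicode")
```

Serializing records this way keeps the elements namespaced, so harvesters that understand Dublin Core can interpret them regardless of the wrapping document.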

The importance of metadata in content digitization

Metadata plays a key role in processing, managing, accessing, and preserving digital content – be it audio, video, or image collections. Metadata has the following key functions:

  • Search: To search for data associated with a file like Author, Date Published, Key Words, etc.
  • Distribute: To determine when and where the content will be distributed
  • Access: To determine delivery of targeted content based upon preset rules matching metadata values
  • Retain: To determine which records to archive
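The Search and Retain functions above can be sketched with a handful of asset records (the field names, authors, and dates are illustrative):

```python
from datetime import date

assets = [
    {"id": 1, "author": "A. Rivera", "published": date(2012, 5, 1),
     "keywords": ["flood", "archive"]},
    {"id": 2, "author": "B. Chen", "published": date(2018, 3, 9),
     "keywords": ["world cup", "broadcast"]},
]

def search(assets: list, keyword: str) -> list:
    """Search: return assets whose keyword metadata matches."""
    return [a for a in assets if keyword in a["keywords"]]

def to_archive(assets: list, cutoff_year: int) -> list:
    """Retain: pick records published before the cutoff for archiving."""
    return [a for a in assets if a["published"].year < cutoff_year]
```

The same metadata fields drive both functions: keywords answer user queries, while the publication date drives retention rules.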

Optimizing metadata for content digitization

The importance of metadata lies in the fact that it makes content searchable – both online and offline. When filtering, search engines weigh fields in this order: title, description, and tags. Some key points to remember while using metadata for content digitization are:

Optimize the title

Grab attention with a catchy and compelling title. To make a title search-engine (and mobile) friendly, limit it to 120 characters and include your top keywords. Think about what the audience would relate to, and make the title informative and relevant.

Optimize the description

Include your keywords and detail what the content is all about. Put the most critical information within the first 22 words of your description, as search engines display only that much in the results list before the viewer clicks the ‘see more’ button.
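The 22-word cutoff can be checked mechanically before publishing; a small sketch (the word-based cutoff is a simplification of how result snippets are actually truncated):

```python
def preview(description: str, limit: int = 22) -> str:
    """Return the portion of a description shown before 'see more'."""
    words = description.split()
    if len(words) <= limit:
        return description
    return " ".join(words[:limit]) + " ..."
```

Running a draft description through such a check makes it obvious whether the critical information survives truncation.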

Optimize the tags

A couple of things to keep in mind while tagging a digital asset are:

  1. Assign keywords that cover the 5 W’s – what, when, who, why, and where – to make it a well-captured asset
  2. Avoid grammatical errors while assigning keywords
  3. Avoid ambiguous words or words with multiple meanings
  4. Be consistent with abbreviations and acronyms
  5. Use around 8 – 12 tags per asset
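Points 2 – 5 lend themselves to an automated check at ingest time; a minimal sketch (the tag-count bounds follow the guideline above, and the duplicate check treats casing differences as inconsistency):

```python
def check_tags(tags: list, lo: int = 8, hi: int = 12) -> list:
    """Return a list of problems found with an asset's tag set."""
    problems = []
    if not lo <= len(tags) <= hi:
        problems.append(f"expected {lo}-{hi} tags, got {len(tags)}")
    if len({t.lower() for t in tags}) != len(tags):
        problems.append("duplicate or inconsistently cased tags")
    return problems
```

Checks for ambiguity or grammar are harder to automate, but count and consistency problems can be caught before an asset enters the archive.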

Conclusion

Metadata plays a crucial role in keeping track of content right from its inception through its processing and accessibility. It provides a complete description of the purpose and function of the data, making it easier for end-users to locate and retrieve it. Therefore, it is crucial that all content has metadata embedded in it.

[1] https://www.recode.net/2014/4/8/11625358/modernizing-the-entertainment-industry-supply-chain-in-the-age-of

[2] https://merchdope.com/youtube-stats/

[3] https://www.techopedia.com/definition/1938/metadata

[4] https://committee.iso.org/sites/tc46sc11/home/projects/published/iso-23081-metadata-for-records.html

Pros and Cons of Linear Tape Open (LTO) for Long-term Content Archiving

Originally developed in the late 1990s, LTO (Linear Tape Open) is a magnetic tape data storage technology that has offered extensive storage for a variety of applications over the past two decades, including long-term archiving, data backup, high-capacity data transfer, and offline storage.

LTO technology has improved considerably across its successive generations (1 – 8), adding features such as write-once, read-many (WORM); data encryption; and partitioning to enable the Linear Tape File System (LTFS). These enhancements have boosted its overall performance in storage capacity, speed, data transfer rate (MBps), digital encoding methods, and compression techniques.

An overview of the LTO generations is depicted hereunder:

| LTO Type | Year of Introduction | Native Capacity | Compressed Capacity | Compression Ratio | Native Transfer Rate | Compressed Transfer Rate |
|----------|----------------------|-----------------|---------------------|-------------------|----------------------|--------------------------|
| LTO-1    | 2000                 | 100 GB          | up to 200 GB        | 2:1               | 20 MBps              | 40 MBps                  |
| LTO-2    | 2003                 | 200 GB          | 400 GB              | 2:1               | 40 MBps              | 80 MBps                  |
| LTO-3    | Late 2004            | 400 GB          | 800 GB              | 2:1               | 80 MBps              | 160 MBps                 |
| LTO-4    | 2007                 | 800 GB          | 1.6 TB              | 2:1               | 120 MBps             | 240 MBps                 |
| LTO-5    | 2010                 | 1.5 TB          | 3 TB                | 2:1               | 140 MBps             | 280 MBps                 |
| LTO-6    | 2012                 | 2.5 TB          | 6.25 TB             | 2.5:1             | 160 MBps             | 400 MBps                 |
| LTO-7    | 2015                 | 6 TB            | 15 TB               | 2.5:1             | 300 MBps             | 700 MBps                 |
| LTO-8    | 2017                 | 12 TB           | 30 TB               | 2.5:1             | 360 MBps             | 750 MBps                 |

Pros & Cons of data storage on LTO

Pros:

Storage Capacity & Costs

For industries dealing with huge data volumes, archival on LTO is cheaper and more effective than storage on internal hard drives. LTO data archival has grown rapidly in sectors with a continuous flow of data throughout operations, such as media, entertainment, data analytics, and science. With the latest generation, LTO-8 (as shown above), one can store 12 TB of uncompressed data at a 360 MBps transfer rate, or up to 30 TB compressed at 750 MBps, on a single tape costing about $100.
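The media-cost arithmetic works out as follows (the ~$100 cartridge price comes from the paragraph above and varies by vendor):

```python
def cost_per_tb(tape_price_usd: float, native_capacity_tb: float) -> float:
    """Media cost per terabyte of native (uncompressed) capacity."""
    return tape_price_usd / native_capacity_tb

# An LTO-8 cartridge: roughly $100 for 12 TB of native capacity,
# i.e. about $8.33 per terabyte of media cost.
lto8_cost = cost_per_tb(100, 12)
```

Note this counts media only; the drive cost discussed under "Cons" below must be amortized across the whole tape library.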

Life Span/Durability

LTO cartridges offer an extensive lifespan, averaging 30 years, with reliable backup and recovery throughout the life cycle.

Data Mobility

Transferring voluminous data over networks is an expensive and time-consuming process, and link or interoperability failures can lead to data corruption. There is also the risk of unauthorized access over the internet, a serious threat to data confidentiality.

LTO, on the other hand, provides an easy & rational means of data exchange physically over tape from one location to another.

Technology Upgrade

LTO technology has grown remarkably over the years, with new releases every two to three years bringing expanded storage capacity, higher data transfer rates, and improved data compression and encryption.

The LTO Program group has laid a product timeline with new releases up to LTO-12 delivering incremented storage capacity and performance growth.

Disaster Recovery

As backup data stored on LTO is preserved offline, it is safe from viruses and malware, and the whole data set can be restored as needed.

Cons:

Operational costs

The overall operational cost of tape-based archival is comparatively high: LTO drives, which record the data onto magnetic tape, range from $2,000 – $3,500, and enterprise versions can cost more.

Keeping up with the technology

LTO-1 was introduced in 2000 and LTO-6 twelve years later, in 2012 – roughly a new generation every two to three years. In practice, tapes are migrated every second generation, since drives typically read and write only two or three generations of tapes. If you record on LTO-6 and leave the tape on a shelf for 60 years, there is no guarantee a reading device will still be available, and with very high probability most of the data will be gone.

Tapes are not random access like hard drives

An LTO drive records data onto magnetic tape linearly, so tapes support only sequential access. This adversely affects the speed of storing and retrieving data compared with the random access of hard drives.

Because of this linear design, inserting new data or modifying existing data mid-tape erases everything beyond the point of insertion or modification. Data must therefore be appended starting from the last written sector to avoid deleting existing data. This sometimes leads to data replication and reduces the optimal use of a tape’s storage space.

Conclusion:

Whether LTO and LTFS are the optimum storage choice depends on the amount of data to be archived and how frequently end users need to access it. There is no doubt that LTO is an ideal medium for the offline preservation and protection of data from completed projects; its features make it well suited to long-term data retention and content archive applications.

LTO tape backup is consistent, durable, and cost-efficient for long-term data archiving, especially when supported by an offsite tape vaulting service.

LTO tapes are a strong option for archiving huge amounts of data over the long term, especially for industries that produce substantial data throughout their lifecycle, such as media, entertainment, surveys, medical records, legal records, and libraries.

In contrast, an in-house disk system – or even cloud storage – works efficiently for data that needs frequent access at low latency. The ability to randomly access and modify existing data on disk also minimizes the chances of data replication.

Six Ways Broadcasters & Media Organizations are Leveraging Big Data Analytics

Have you ever wondered what led Netflix to invest around $50 million in each season of ‘House of Cards’? Or how Chennai Express broke Box Office collection records in 2013? The answer to both is advanced data analytics.

The media industry is increasingly leveraging analytics to predict audience sentiment, woo new audiences, and retain existing ones. Be it OTT media service providers like Netflix or Amazon Prime, or the film and music industry, marketers are using advanced analytics and machine learning to generate a pull for their content.

As technology, social media, and analytics give the media industry the power of the Internet to leverage, we look at six ways the media and entertainment industry is using analytics:

Generating targeted content

Data-driven decisions are the future of the media and entertainment industry. With huge amounts of data available to analyze, draw inferences from, and predict customer preferences with, the industry no longer depends on intuition to decide what will work. For example, Netflix claimed that when it invested in House of Cards, it already knew the series would be a hit – thanks to viewership data that let it analyze viewers’ habits across many millions of show views.

Optimizing scheduling of content

Big data gives the power to media houses to collect data from diverse sources and understand customer preference – be it the type of content, the time, or the device used. Using advanced analytics, they can then optimize the scheduling of content. For example, on a local holiday, broadcasters can stream popular movies, or more home-oriented content during afternoons.

Optimized scheduling is not limited to general analysis; it extends to more detailed predictions based on browsing history, weather conditions, or time of day.

Relevant recommendations

Considering the massive amount of data that the media and entertainment companies generate daily, analyzing it to gain insights into the popular genres or preferred time is not an easy task. However, if appropriately interpreted with a good recommendation engine, the data can increase user engagement manifold by providing an effective recommendation.

Media and entertainment companies are increasingly using machine learning and advanced analytics tools to analyze viewership data in real-time and provide relevant recommendations to the audience.

Targeted advertising

Thanks to big data, analysts have a better understanding of consumption behavior across multiple platforms. With advanced segmentation and complete customer views, companies can micro-segment customers to personalize ads. Targeted ads ensure that the right people see the right ads, increasing click-through rates and thereby conversion rates and ROI.

Retaining and wooing viewers

Data gives insight into why viewers subscribe to or unsubscribe from a channel. Media and entertainment companies can use viewership data to devise the best product and promotional strategies to attract new viewers and prevent churn among existing ones.

Finding newer sources of revenue

Considering the ever-changing equation of customer preference and new technology, it is essential for media and entertainment companies to explore new sources of income continually. Advanced analytics can help companies do that – identify additional sources of revenue apart from advertising or partnerships. For example, companies can create a proprietary platform using their exclusive data and earn revenue from advanced advertising.

While the media and broadcast industry has always relied primarily on data in the form of ratings, viewership, TRPs, etc. to measure success, advanced analytics has taken it a step further. By analyzing real-time data from multiple sources, predictive analytics is now helping companies not only measure success but also strategize for the future.

7 Document Management Trends to Watch Out For

Businesses the world over are undergoing a digital transformation, and organizations are investing in tools, technology, and processes to go paperless by converting their documents to digital formats. These documents may be forms, invoices, letters, books, journals, photos, maps, manuscripts, office records, or other printed materials.

Apart from being eco-friendly, the benefits of going digital are many, ranging from instant access, distribution, and longevity to reduced storage costs and more. As the world goes digital, let us look at some of the key document management trends shaping the future:

Cloud-Based Document Management

With cloud storage having been around for quite a few years now, the initial hesitation has given way to adoption. Ease of accessibility and scalability has fueled the adoption of cloud storage for document management, ensuring documents are available on the go – without needing to be inside a closed network.

Social Integration

Digital record managers are integrating social media technology into document management, making collaboration, storage, organization, and revision of files a seamless task. Integration with social media also has the advantage of sharing documents in varied formats across platforms for a wide range of audiences.

Artificial Intelligence

With artificial intelligence gaining popularity, document management will witness new search capabilities in the coming year. AI will make search simpler with the right keywords and voice search, allowing professionals to focus on their work rather than spending time searching for the documents that they need.

Robotic Process Automation

Robotic process automation is gradually gaining popularity as a way to offload mundane tasks so staff can focus on work that creates value. In the coming years, software “robots” will automate labor-intensive, repetitive activities that are prone to errors. RPA will see more adoption for document sorting, classification, automating routine operations, and integrating unstructured data like emails, forms, photos, and files.

Mobile Access

With the changing structure of the workplace environment, as more employees are working remotely, accessibility of documents from a wide variety of devices has become a necessity. As the world becomes a global village, workers need to be able to access the documents from devices including smartphones and tablets and work on them efficiently.

As mobile usage continues to grow, document accessibility is not the only requirement for professionals who are always on the go; document management software also needs to be user-friendly to provide a seamless experience.

Collaboration

As geographical boundaries cease to matter, professionals are no longer dependent on face-to-face interaction. Working over email already exists, but it can be cumbersome and confusing. Collaboration and project management tools have been around for a while and are now becoming part of mainstream platforms, which will make document management easier.

Coming years will see document managers collaborating online within a single social space in real-time which will make working on and managing multiple projects faster irrespective of the geographical location of the professionals.

Scalable Solutions

Scalability is a necessary criterion for keeping up with the growing volume of documents. Modern document management solutions will not only let employees collaborate and edit on a single platform but will also enable clients to scale with a growing number of users, storage requirements, locations, or document volume. Cloud-based solutions will have an edge, as they can offer pay-per-user or pay-per-storage pricing models.

Cloud computing, collaboration, and the proliferation of mobile devices are making lives easier for document management professionals. Coming years will continue to see the growth and mass adoption of technology giving birth to integrated, user-friendly solutions, blurring the physical space issues.

5 Technology Innovations in the 2018 FIFA World Cup

The FIFA World Cup has always been a showcase for new technologies in sports broadcasting. For example, 2014 saw vanishing spray, goal-line technology, and a mind-controlled robotic suit in action.

As the 2018 FIFA World Cup kicks off, we look at five technology innovations being used for the first time in the world’s most significant soccer competition.

1. Video Assistant Referee (VAR)

With the International Football Association Board approving VAR in March 2018, Russia is witnessing the first World Cup using video technology as an additional tool for referees. FIFA has deployed VAR at all the matches, where a dedicated team with a lead VAR and three assistant VARs is located at the Moscow International Broadcast Centre.

According to FIFA, VAR is used as an additional tool to “correct clear and obvious errors and missed incidents in clearly defined match-changing decisions.” 33 broadcast cameras and two dedicated offside cameras transmit feeds directly to the video operation room (VOR) over optical fiber – eight of them in super-slow motion and four in ultra-slow motion. VARs can also speak to the referees on the pitch using a fiber-based radio system.

2. Electronic Performance and Tracking System (EPTS)

At the 2018 World Cup, the coaches of all 32 teams have access to a tablet-based system to track statistics and real-time video footage of the players. EPTS combines wearable technology with a camera-based system that captures data via two optical tracking cameras on the main stand and selected tactical cameras.

EPTS is considered FIFA’s second significant innovation after VAR. Each team has three tablets – one each for the analyst on the bench, the analyst in the stand, and the medical team. The system also provides statistics on player positioning, passing, speed, and tackles, plus match footage with a maximum 30-second delay.

3. Adidas Telstar Ball

For the first time in FIFA history, Adidas, the official manufacturer of the World Cup match ball since 1970, has put an NFC chip in the ball. Telstar 18, the ball used for the 2018 World Cup, includes a Near Field Communication (NFC) chip along with a new carcass and panel design. The chip allows the ball to communicate with a smartphone, and Adidas claims the new model dramatically improves the ball’s performance durability.

4. Virtual Reality (VR) and 4K UHD Video

4K UHD video, tested at the 2014 Brazil World Cup, is now available to broadcasters at the 2018 Russia World Cup. The BBC has confirmed that 4K streams of matches are available on BBC iPlayer on a first-come, first-served basis.

The BBC has also made a VR feed available via its VR application, giving viewers the feeling of watching the match from a private box at the stadium.

5. 5G Services

Though 5G is yet to be commercially available, the 2018 World Cup is a testing ground for network providers. Ericsson and MTS have confirmed the installation of 5G-capable radio equipment covering fan zones, stadiums, transportation hubs, and famous landmarks across more than 40 sites in seven of the 11 host cities.

Additionally, Megafon and TMS – the official communications partner – are holding 5G network trials across cities during the event.

While the players compete on the pitch, sports broadcasters are busy competing off it, innovating to make the experience more immediate and the competition more transparent. But for now, enjoy the game!

Leveraging Artificial Intelligence in Digitization

Digitization is a necessity today – both for preserving content and for making it searchable. Be it physical libraries or digital media, media organizations and content owners are investing in digitizing and archiving legacy content. Organizations often spend hours recreating or searching for content that already exists; aged and untreated content, neglected metadata, and poorly chosen storage solutions frequently cost broadcasters dearly.

Perhaps without your noticing it, artificial intelligence (AI) is changing this scenario. Think of personalized playlists on YouTube or Spotify, or recommendations on Netflix and Amazon Prime: broadcasters are using AI to curate a selection of tailor-made content.

A few weeks after Donald Trump was elected, the Internet Archive’s TV News Archive aggregated more than 520 hours of televised Trump speeches, debates, interviews, and other broadcasts dating back to 2009. Thanks to the Trump Archive, the footage doesn’t get lost in the flood of news, giving journalists, scholars, and citizens a chance to track and analyze Trump’s statements on public policy issues.

Netflix claims to save about US$1bn annually due to AI technology’s ability to automate workflows and reduce customer churn.

After Wimbledon 2017, IBM Watson used a cognitive algorithm to produce highlight reels of what it judged the best shots of the tournament. By automatically analyzing audio and video from the footage to identify highlight-worthy shots and points, artificial intelligence saved editors hundreds of man-hours.

Here are five ways in which artificial intelligence is revolutionizing the way we archive, process, and store documents and extract information from them.

Automated processing

Optical character recognition (OCR) can turn scanned images into machine-readable text. AI can go further: it can read, classify, and route documents into workflows based on that information in minutes. Initially fed with a set of rules, an AI system uses machine learning to improve its identification and processing capabilities over time.
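
As a loose illustration of the rule-seeded starting point described above, here is a minimal Python sketch that routes OCR output into workflow queues. The rule patterns, labels, and sample text are hypothetical, not taken from any real product; a learning system would refine such seed rules over time.

```python
import re

# Hypothetical seed rules an institution might start with;
# a machine-learning model would later refine these.
RULES = {
    "invoice": re.compile(r"\b(invoice|amount due|total)\b", re.IGNORECASE),
    "contract": re.compile(r"\b(agreement|party|hereby)\b", re.IGNORECASE),
}

def route(ocr_text: str) -> str:
    """Classify OCR output into a workflow queue by rule matching."""
    for label, pattern in RULES.items():
        if pattern.search(ocr_text):
            return label
    return "manual-review"  # fall back to a human queue

print(route("INVOICE #42  Total: $18.00"))  # invoice
```

Anything the rules cannot match falls through to a manual-review queue, which is also where labeled training data for a later ML classifier would come from.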

Data extraction

Data extraction reaches a whole new level with an AI-powered document management system, which can accurately read information and understand its context.

Document clustering

AI can also group unclassified documents by topic, helping organizations understand documents within a larger context, find resemblances, and draw conclusions that would otherwise be time-consuming or impossible.
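
A minimal sketch of topic-based grouping, using plain bag-of-words cosine similarity and a greedy single-pass strategy. The documents, the 0.3 threshold, and the greedy assignment are illustrative assumptions, not a production clustering approach:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(docs, threshold=0.3):
    """Greedily assign each document to the first cluster whose
    seed document is similar enough, else start a new cluster."""
    vectors = [Counter(d.lower().split()) for d in docs]
    clusters = []  # each cluster: list of document indices
    for i, v in enumerate(vectors):
        for c in clusters:
            if cosine(vectors[c[0]], v) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

docs = [
    "invoice payment due amount total",
    "payment invoice overdue amount",
    "holiday travel itinerary flight hotel",
    "flight hotel booking travel",
]
print(cluster(docs))  # [[0, 1], [2, 3]]
```

Real systems use richer representations (TF-IDF, embeddings) and proper clustering algorithms, but the principle is the same: similar word distributions end up in the same group.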

Advanced security

A document management system powered by AI can help enforce user access controls. By using secure biometric techniques such as facial recognition to identify which employees may access the data, it can prevent unauthorized viewing or alteration of documents.

Data analytics

Cognitive platform-as-a-service (PaaS) offerings like Microsoft Azure Cognitive Services and IBM Watson apply techniques such as predictive analytics, machine learning, and data visualization to the collected data to improve decision-making.

The way ahead…

At IBC2017, AI was one of the main themes for the first time, which speaks volumes about its adoption. Recently, a company named Ripcord patented and built robots that can scan and sort a box full of paper, from business cards to legal documents, and enter the contents into a searchable database in the cloud. As AI adoption increases across industries, we can expect faster, better analysis and improved decision-making across the broadcast industry.

5G – The Next Generation Network Is Here

Speed is the watchword of today’s world. Everything should be fast, efficient, and clear, with minimum latency. And it would not be inappropriate to say that 5G, the 5th generation of mobile networks, best represents that. 5G, the much-anticipated future network, aims at a higher-capacity communication network that is faster and denser and provides ultra-high-definition output. In short, a better implementation of the Internet of Things.

Going by the predictions, there will be 550 million 5G subscriptions by 2022, and 10% of the world’s population will be covered by 5G networks. 1

5G in Television and Media

Who can deny the impact 4G has made on the media and broadcasting industry? It became the trigger point of television’s changing landscape, bringing viewing from the television set or computer to the individual mobile screen. It paved the way for huge consumption of mobile video and helped expand the market for everything, be it films, music, news, television shows, or any other form of video content.

The 5G network aims to surpass 4G standards many times over in data bandwidth, frequency, technological supremacy, streaming quality, and reduced network congestion.

  • Disruptions Foreseen in Broadcast Industry

While 5G will deliver a world-class viewing experience, it could also open the door to some serious industry disruption.

  • Innovative Content

Content consumption will be heavily affected by the huge technical improvements in speed and quality. Consumers will enjoy a significant increase in download and upload speeds.

Near-zero latency is sure to feed viewers’ impatient DNA. It will also narrow the gap between quality and speed; live-streamed and virtual-reality content will be on the rise, pushing creators toward more innovative, original, and creative content.

  • Value Chain Effects

The Internet made content king, keeping major profit margins with the content innovators. The onset of 5G is predicted to divert that route toward distributors, forcing content providers to pay more for efficient streaming of their content.

  • Consumption Effects

Streaming content has been the winner to date, with low cost and the limits of earlier technology being the key reasons. By cutting download times dramatically, 5G will make downloaded content more feasible and popular in the coming years.2

Challenges

But there are two sides to every coin. On one hand, 5G provides an ideal environment for television broadcast, with top-class features like enhanced network speed and technological advancement. On the other, it hints at becoming a threat to the standard ways in which we have watched content until now, through cable, satellite, IPTV, and broadcast providers, a market of approximately $500 billion. 3

Some of the challenges 5G would bring are:

  • Out-of-the-box content will have to be offered to leverage the huge shift from traditional viewing to mobility.
  • Data rates will be something to watch out for, as all these advancements lead to increased costs.
  • Stability and consistency will play a major role in the network’s success keeping in mind the continuous increase in the number of users.
  • The efficiency of end-to-end delivery will determine the real-time feasibility of the 5G network.
  • Huge investments would be required to upgrade the technology and meet 5G standards.

The Future

It is too soon to predict the future of the 5G network. Looking back, each generation has aimed at fixing the flaws of its predecessor. The first mobile networks of the 1980s were followed by GSM in the 1990s; 3G arrived at the turn of the century, and LTE rolled out in 2010. 4G was introduced to make consuming data a less unpleasant experience. The work is still in progress: going by the statistics, 4G has yet to launch in various parts of the country.

But the trail seems to break here. It is difficult to think of any major challenge to put before 5G that is worth such huge infrastructural investment and change. Right now, 5G is only a concept whose standards have yet to be established, and finalizing the whole 5G structure is likely to take a few years. The foundations are being laid, with substantial funding coming from the EU, South Korea, the US, and the UK to build up 5G research facilities.

The momentum is surely building. A super-fast, super-efficient wireless network is set to make its mark in the media world by 2020. At its best, it promises the ability to watch television content over a 5G connection rather than fixed broadband, cable, or satellite. In fact, the conjunction of speed and technological advancement could create an ideal environment for the television market.

The industry knows what it wants. The Internet of Things, telehealth systems, and smart-city infrastructure are some of the features set to figure in 5G thinking. What finally forms part of the 5G spectrum, only the coming years will tell. 4

Top six trends that are shaping the future of television

We don’t know how, when, or why television came to be called the idiot box the world over. On the contrary, it is a smart device that has held our attention unabashedly for the longest time, from its inception and the first television service by the British Broadcasting Corporation in 1936. Today, more than 80 years on, it still rules our hearts and remains a major source of entertainment and global information.

The television industry has grown extensively in the last few decades, and the wave continues. Whether in technology, state-of-the-art design, or content, the many key players in a highly competitive market keep up the pace of development. Advances in knowledge, exposure, urbanization, and buying power, along with a continuous shift in user preferences, keep innovators on their toes to think beyond the edge.

In terms of technology, end users might think the peak has been reached and that there is nothing more innovators can come up with. But anyone who follows current trends can see that the industry is preparing for something that will take television technology to an altogether different level of viewing.

Some of the trends that are shaping the future of Television include:

  1. Holographic TV

The BBC has always been a front-runner in anything concerning TV. It has trialed technology and content in which a fairly large TV is laid flat and simple, old Victorian theatre techniques are used to create 3D images that seem to float in the air.1

Though holographic TV is still at a nascent stage, the initiative is laudable.

  2. Data Analytics

The “big data analytics” revolution aids smart viewership. Early adopters like Netflix have used it extensively to carve a niche in a domain as creative as content production. We hope to see it adopted more widely and more hands-on to optimize produced or acquired content. The steps are simple: behavioral data is collected from various sources, then classified and evaluated to help identify end-user preferences.

  3. Virtual Reality

In terms of technology, virtual reality is predictably the next big thing, aiming to completely revolutionize the concept of TV watching. In this ever more popular medium, a consumer wearing a VR headset can explore virtual, computer-generated worlds: VR replicates an environment, simulates the user’s presence in real time, and allows full interaction. Integrated with TV shows, VR could let a user be part of the show. That looks a bit far-fetched at the moment, but not too far off. In fact, director Steven Soderbergh’s ambitious new project Mosaic, an interactive narrative app due for release soon, is a perfect example of this concoction: the audience becomes part of the narration and gets to decide how the story unfolds. It is a new way of storytelling, and irrespective of its reception and success, it paves the way for more such experiments.2

  4. Virtualization

Virtualization creates an ecosystem where independent services can share a common platform. Fully realizing this will take some time, but cloud-based broadcasting, in which content is put on public clouds for a smooth broadcasting and viewing experience, is fast catching on, and the coming years will see major investment in cloud solutions. The model does away with large hardware investments and their maintenance, and broadcasters benefit from scalability and high levels of efficiency. Cloud solutions are extremely cost-effective, reduce turnaround time, and help manage viewer demand to a large extent. Predictably, a cloud utility model will turn broadcasters into orchestrators whose job is to deliver aggregated content. The transition is already under way: key players like the BBC and Disney/ABC Television have started making the shift as the world moves toward virtualization. 3

  5. Immersive and Interactive Experience: Augmented Reality

Technologists have always aimed for maximum consumer participation. Tools are being developed that engage our senses to blur the line between the real and digital worlds, making the viewer part of the content. A mid-world is created in which 3D and 4D images give the brain a real-time perception and the user feels more involved, while 3D surround-sound audio manipulates what one hears for an even more lifelike effect. This is easily confused with virtual reality, but augmented reality deals more with the real world: it enhances the experience by adding drama to it. Broadcasters are making hands-on use of it, developing more interactive and engaging shows with maximum audience participation.

Channel 4’s plan to introduce interactive advertising on British TV is one step in this direction: viewers will enjoy the liberty to choose different ads, watch different content, or even buy products instantly.4

  6. Humanoids: AI Comes to Television Broadcasting

Robots have fascinated us since time immemorial. Recently a humanoid robot, Sophia, appeared on BBC television as a spokesperson on BBC’s Earth TV. Made of frubber (flesh rubber), it is highly sophisticated: it has a human face and emulates human emotions in real time. It does not run on true artificial intelligence; rather, its answers are scripted. One can only imagine the experience of watching television in a future with more such humanoid interactions. 5

On the Whole

Above are some of the observed trends that are going to shape the future of TV, and the list keeps growing. Technologists and innovators constantly aim for the utmost consumer participation, keeping audiences more involved and engaged. Broadcasters are trying hard to absorb new technologies and blend them to give their audiences different forms of experience. The game, really, is to play with the human mind and show viewers something that heightens their curiosity: a new-age fairy tale, illusory and beautiful, that compels one to be part of those moments.

Are people still watching movies on cinema screens?

No matter how many screens we add, the charm of the cinema screen is not fading. Driven by growing audiences, an increasing number of screens, and disruptive technology, the film industry continues to expand the world over.

Recent research by Research and Markets 1 indicates that the global movie and entertainment industry is expected to reach an estimated US$139 billion in 2017, with a CAGR of 4.2% over the next five years.

Meanwhile, per estimates by the Motion Picture Association of America (MPAA), total worldwide box office rose by a mere 1% in 2016, to $38.6 billion in ticket sales. North America (USA and Canada) still dominates the box office as an individual territory, accounting for $11.4 billion in ticket sales, up 2% from 2015. International markets made up 71% of the global box office in 2016, at $27.2 billion, compared with 63% a decade ago 2.

Though bets placed on China’s growth did not meet predictions, its cinema market is still projected to grow at a staggering 11.6% CAGR, from US$6.2 billion in 2016 to US$10.7 billion by 2021 3. Domestic films still command 58.33% of total box office collections in a country where foreign film releases are restricted to a limited number; 2016 saw over 92 foreign films released in China, the most in the country’s history. An average of 26 new screens a day were added for theatre-going audiences in 2016, bringing the total to 41,179 movie screens, the most in the world 4.
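
The growth figures quoted throughout this section follow the standard compound annual growth rate formula, CAGR = (end / start)^(1/years) − 1. A quick sketch using the China numbers above shows the arithmetic roughly reproduces the quoted 11.6%:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# China cinema market: US$6.2bn (2016) -> US$10.7bn (2021)
rate = cagr(6.2, 10.7, 5)
print(f"{rate:.1%}")  # ≈ 11.5% per year, matching the quoted figure
```

The small gap from the quoted 11.6% comes from the market sizes being rounded to one decimal place.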

Still the top film-producing country, India saw its box office market valued at US$1.9 billion in 2016, and the overall Indian film industry is projected to grow at a CAGR of 7.7% over the next five years. 2017 should post higher growth with the worldwide release of Bahubali 2: The Conclusion (2017) and the Asian-market release of Dangal (2016). Dangal has become the highest-grossing Indian film ever, collecting over US$290 million worldwide. Overseas theatrical releases from India grew 14% year on year, a reflection of how Indian cinema is being appreciated in the global market. Collections from the theatrical release of Dangal in China alone are close to US$191.03 million 5.

Piracy, coupled with talent management, is one of the biggest threats for filmmakers globally. Filmmakers are responding with innovative marketing, premiering movies on OTT and pay-TV platforms and adopting digital storage and content-security techniques. Technology, along with stories, will continue to be the key force driving audiences to watch films.

You might also be interested in “[Infographic] Filming For Entertainment: How Big Is The Screening?”