MediSens Conference Agenda

The MediSens conference, to be held in London, UK on Dec 13-14, 2016, announces its agenda:

A brief history of medical imaging
• Imaging in the pre-digital age
• The breakthrough of digital X-rays – applications in medicine and beyond
• Impact of digitisation on the clinical environment
• Lessons learned by a pioneer in digital X-rays
• Some thoughts about the future…
Special Guest Keynote

Delivering an approach driven by clinical need
• Developments and challenges in cancer detection
• Understanding the technical and human challenges for medical imaging
• New techniques towards 3D quantitative imaging to ensure early and improved diagnosis and effective treatment
• Patient-specific diagnosis – Precision Medicine and personalised care
Dr Dimitra Darambara, Team Leader – Multimodality Molecular Imaging, The Institute of Cancer Research

An overview of research and development into molecular imaging
• Understanding the challenges in molecular imaging
• Multi-modal imaging: what are the data implications?
• Assessing the advantages and opportunities of molecular imaging for patient groups vs. cost
• Advantages of PET/SPECT/CT, optical-PET, SPECT-MRI, PET-MRI: which mode has the best clinical potential?
Speaker TBC

Advances in (digital) single-photon detectors for PET
• Single-photon Avalanche Diode (SPAD) fundamentals
• From single SPAD pixels to arrays and systems
• Examples of state-of-the-art SPAD-based PET systems and frontline R&D activity
Dr Claudio Bruschini, Project Manager & Senior Scientific Collaborator, EPFL

Computed tomography using a proton therapy beam: the PRaVDA collaboration
• Translation of technology from basic science to medical imaging
• Challenges of imaging proton beams
• PRaVDA integrated imaging system for proton beam computed tomography
Dr Phil Evans, Professor of Medical Radiation Imaging, Centre for Vision, Speech and Signal Processing, University of Surrey

What’s going on in the world of solid state medical imaging
• Exploring the new wave of innovation focused on CMOS image sensors in the solid state medical imaging market
• Assessing the status of CIS technology and related medical imaging applications
Jérôme Mouly, Technology & Market Analyst, Yole Développement

Advances in CMOS wafer-scale imagers for medical imaging
• Clinical needs of X-ray imaging modalities and how they flow down to detector CTQs (critical to quality)
• CMOS detector benefits and challenges (IQ, artefacts, reliability, cost)
• Opportunities for CMOS in medical imaging
Dr Biju Jacob, Senior Engineer, X-ray Detector Development, General Electric

How CMOS can be further leveraged to advance medical imaging
• CMOS Image Sensor (CIS) technology: where it is today
• What can CIS bring to the medical imaging community
• Where is the technology going and how could this impact medical imaging
Dr Renato Turchetta, CEO, Wegapixel

Use of hybrid organic X-ray detectors to improve the specificity and sensitivity of digital flat-panel X-ray detectors
• X-ray imaging with hybrid organic-inorganic conversion layers
• Scintillator particles embedded in an organic semiconducting matrix
• Digital flat-panel X-ray detectors using a “quasi-direct” X-ray conversion technology
• High-resolution digital flat-panel X-ray detectors
• DiCoMo: combination of a hybrid frontplane and an active-pixel backplane made of metal oxide TFTs
Dr Sandro Tedde, Senior Key Expert Research Scientist, Siemens Healthcare

Image-enhanced endoscopy and image sensor design for improved diagnostic accuracy
• Introduction to the technical aspects of image-enhanced endoscopy
• Efforts towards smaller endoscope diameters and higher resolution from tiny cameras
• Requirements for image sensor design in terms of high occupancy of the secured pixel area
• High resolution and high S/N ratio
Koichi Mizobuchi, Deputy General Manager, Medical Imaging Technology Department, Olympus Corporation

Advances in cardiac imaging: what it means for the future possibilities of X-ray technology
Prof Gary Royle, Professor of Medical Radiation Physics, UCL

Disease diagnosis in the distal lung using time-resolved CMOS single photon detector arrays
• Fibre-based sensing and imaging system allowing minimally invasive diagnostics in the distal lung
• Sensing physiological parameters like pH through exogenous fluorophores or Surface Enhanced Raman Sensors (SERS)
• Application of SPAD sensors to aid in the disambiguation between bacterial fluorescent probes and tissue fluorescent signals
Prof Robert Henderson, Institute of Integrated Micro and Nano Systems, University of Edinburgh

Closing keynote: How robotics is changing the face of surgery
• Understanding the imaging requirements for surgical robots
• Current and future computer vision specifications
• How robotics could impact diagnosis and treatment
Professor Guang-Zhong Yang, Director & Co-Founder, Hamlyn Centre for Robotic Surgery & Deputy Chairman, Institute of Global Health Innovation, Imperial College London

In addition to the main conference, there is also a workshop by Albert Theuwissen:

First Session:
• CMOS image sensor basics
• Rolling and global shutter
• Ins and Outs of 1-transistor, 3-transistor and 4-transistor pixels
Second Session:
• From X-rays to digital numbers
• Stitching to make monolithic large area devices
• Butting to make large imaging arrays
• Binning and averaging (see the sketch after this list)
• Yield/cost of large area imagers
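
As background for the binning topic above (not part of the workshop material itself): binning sums neighbouring pixels, e.g. 2x2, into one output sample, trading resolution for signal, while averaging divides that sum by the number of pixels. A minimal numpy sketch of the two operations:

  import numpy as np

  def bin2x2(img):
      # Sum each 2x2 block into one output pixel (analogous to charge-domain binning)
      h, w = img.shape
      blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
      return blocks.sum(axis=(1, 3))

  def avg2x2(img):
      # Average each 2x2 block (digital averaging keeps the original signal scale)
      return bin2x2(img) / 4.0

  # Example: a noisy 4x4 frame reduced to 2x2
  frame = np.random.poisson(100, size=(4, 4)).astype(float)
  print(bin2x2(frame))
  print(avg2x2(frame))
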
ST CEO on ToF Business

SeekingAlpha: ST CEO Carlo Bozotti updates on the company's ToF sensor business: "We had a very strong sequential increase in sales due to the success of our new specialized image sensors based on our proprietary Time-of-Flight technology. In fact, we are seeing strong momentum globally. During the third quarter, we were present in 11 new smartphones, including a new product in flagship phones launched on the market. In addition in Q3, the imaging product division turned to profit.

...in Q3 we are profitable on our imaging business. It is all new products. It's new technologies. Has nothing to do with what we had in the past. That has been completely removed. And it's a part of a strategy of the company because we want to be a big sensor company. Today the run rate that we have in sensors is already above $1 billion and we want to keep going. It's all good products. It's very important for many applications from smartphones to IoT to automobile. It is an important part of the strategy of the company. Of course, this specialty in the sensors I think is a lot of new technologies that we have developed.
"

Sony Unveils DOL-HDR Sensors

After a long hiatus, Sony unveils two new 1.62um pixel image sensors featuring a Digital Overlap (DOL) HDR mode. The 1/2.9-inch 6.82MP IMX326LQC and 1/2.5-inch 8MP IMX274LQC are pin-compatible and target industrial and security applications. In DOL HDR mode, the frame rate drops to half that of the normal mode.


Imager Mania publishes a nice overview of different HDR approaches that Sony used over the years (in Japanese).

Caeleste News

Caeleste is nominated for Deloitte's 2016 list of the 50 fastest-growing Belgian tech companies. The Made in Mechelen web site quotes the company CEO Patrick Henckes (Google translation):

"We have indeed grown in recent years with a prudent rate of 30 to 80% per year. We have an excellent reputation in the very specific niche market of image sensors, a sector in which Flanders has acquired a leading position on the world stage. Of ten firms active in our market, there are three in Flanders.

Our customers are themselves top companies who want their products to be better than their competitors', and for that they need a sensor made to measure that goes beyond all existing sensors. We call it 'beyond state of the art'. Something that already exists, we will not make. Our customers are also distributed worldwide: we have 9 am conference calls with Japan and 6 pm conference calls with California. And our customers come from many diverse fields: aerospace (ESA), astronomy, nuclear physics, engineering, microscopy and many other scientific applications, medical applications such as DNA sequencing, etc. So at our company you come across all of the top research areas in the world.
"

Caeleste announces its two upcoming papers to be presented at the CNES Image Sensor Workshop, to be held on Nov. 26-27 in Toulouse, France:

"Caeleste will present its radiation hardened design data base as well as the effects of radiation on image sensors realized in that technology. We will focus especially on the differences in behavior between ionizing and non-ionizing radiation.

The blue dots and circles show the effect of ionizing radiation on dark signal, while the red dots and circles are the effects of non-ionizing radiation (electrons in this case). The difference in behavior between the photodiode itself and the sense node will also be explained.
"


Caeleste will also present its new pixel combining photon counting for low-light conditions with charge integration for high-flux situations.

ASICFPGA Offers HDR ISP Pipeline

ASICFPGA offers an ISP pipeline with many features, including HDR processing, but, apparently, not including small pixel support (see the stage-order sketch after the feature list):

  • Support RGB Bayer progressive image sensor
  • Support 8 ~ 14 bit Bayer input data
  • Support image sensor of 256*256 ~ 8192*8192 size
  • Defect Correction
  • Lens Shading Correction
  • High quality interpolation
  • 3D Motion Adaptive noise reduction and 2D noise reduction
  • Color correction by 3x3 matrix
  • Gamma correction
  • HDR processing for multiple exposure images and HDR Bayer images
  • WDR (Shadow/Highlight compensation, back light compensation)
  • 2D edge enhancement
  • Support AE, AWB and AF
  • Saturation, contrast and brightness control
  • Support special images (sepia, negative, solarization)
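
For orientation only, the listed features map onto a conventional Bayer ISP stage order roughly as in the runnable sketch below; this is a generic toy pipeline, not ASICFPGA's implementation, and every stage is drastically simplified:

  import numpy as np

  def simple_isp(raw, black_level=64, wb=(1.9, 1.0, 1.6), ccm=np.eye(3), gamma=2.2):
      # Black level subtraction (defect and lens shading correction would also sit here)
      x = np.clip(raw.astype(float) - black_level, 0, None)
      # Naive RGGB demosaic: collapse each 2x2 cell to one RGB pixel (half resolution)
      r = x[0::2, 0::2]
      g = (x[0::2, 1::2] + x[1::2, 0::2]) / 2.0
      b = x[1::2, 1::2]
      rgb = np.stack([r, g, b], axis=-1)
      rgb *= np.array(wb)                  # white balance gains
      rgb = rgb @ ccm.T                    # 3x3 color correction matrix
      rgb = rgb / max(rgb.max(), 1e-6)     # normalize before gamma
      return np.clip(rgb, 0, 1) ** (1.0 / gamma)   # gamma correction

  # Example on a synthetic 8x8 RGGB frame
  raw = np.random.randint(64, 1023, size=(8, 8), dtype=np.uint16)
  print(simple_isp(raw).shape)   # (4, 4, 3)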


The company's demo video shows the HDR capabilities:

More Info on Canon IEDM Presentation

IEDM 2016 press kit has words on Canon paper #8.6, “A 1.8e- Temporal Noise Over 90dB Dynamic Range 4k2k Super 35mm Format Seamless Global Shutter CMOS Image Sensor with Multiple-Accumulation Shutter Technology” by K. Kawabata et al:

"Canon researchers will discuss high-resolution, large-format CMOS imaging technology for use in high-performance cameras large enough to take photographs and videos at ultra-high-definition resolution.

The Canon researchers developed a new architecture that enables the readouts of multiple pixels to be accumulated and stored in memory, and then processed all at once. This technique enabled the implementation of a global shutter while also delivering excellent noise and dark current performance and high dynamic range (92dB at a standard 30fps frame rate).
"

Chronocam Raises $15M Series B Led by Intel Capital

MarketWired, BusinessWire: Chronocam SA announces it has raised $15M in Series B round led by Intel Capital, along with iBionext, Robert Bosch Venture Capital GmbH, 360 Capital, CEAi and Renault Group.

Chronocam will use the investment to continue building a world-class team to accelerate product development and commercialize its computer vision sensing and processing technology. The funding will also allow the company to expand into key markets, including the US and Asia.

“Conventional computer vision approaches are not well-suited to the requirements of a new generation of vision-enabled systems,” said Luca Verre, CEO and co-founder of Chronocam. “For example, autonomous vehicles require faster sensing systems which can operate in a wider variety of ambient conditions. In the IoT segment, power budgets, bandwidth requirements and integration within sensor networks make today’s vision technologies impractical and ineffective.

“Chronocam’s unique bio-inspired technology introduces a new paradigm in capturing and processing visual data, and addresses the most pressing market challenges head-on. We are well-positioned to capitalize on this significant market opportunity; and we appreciate the confidence demonstrated by our investors as we roll out our technology to an increasing number of customers.”


Light L16 Camera Article

IEEE Spectrum publishes Light Co. founder Rajiv Laroia's article "Inside the Development of Light, the Tiny Digital Camera That Outperforms DSLRs." A few quotes:

"...molded plastic lens technology had been nearly perfected over the previous five years to the point where these lenses were ­“diffraction limited”—that is, for their size, they were as good as the fundamental physics would ever allow them to be. Meanwhile, the cost had dropped dramatically: A five-element smartphone camera lens today costs only about US $1 when purchased in volume. (Elements are the thin layers that make up a plastic lens.) And sensor prices had plummeted as well: A high-resolution (13-megapixel) camera sensor now costs just about $3 in volume.

By using many modules, the camera could capture more light energy. The effective size of each pixel would also increase because each object in the scene would be captured in multiple pictures, increasing the dynamic range and reducing ­graininess. By using camera modules with different focal lengths, the camera would also gain the ability to zoom in and out. And if we arranged the multiple camera modules to create what was effectively a larger aperture, the photographer could control the depth of field of the final image.

The first and current version of the Light camera—called the L16—has 16 individual camera modules with lenses of three different focal lengths—five are 28-mm equivalent, five are 70-mm equivalent, and six are 150-mm equivalent. Each camera module has a lens, an image sensor, and an actuator for moving the lens to focus the image. Each lens has a fixed aperture of F2.4.

Five of these camera modules capture images at what we think of as a 28-mm field of view; that’s a wide-angle lens on a standard SLR. These camera modules point straight out. Five other modules provide the equivalent of 70-mm telephoto lenses, and six work as ­150-mm equivalents. These 11 modules point sideways, but each has a mirror in front of the lens, so they, too, take images of objects in front of the camera. A linear actuator attached to each mirror can adjust it slightly to move the center of its field of view.

Each image sensor has a 13-megapixel resolution. When the user takes a picture, depending on the zoom level, the camera normally selects 10 of the 16 modules and simultaneously captures 10 separate images. Proprietary algorithms are then used to combine the 10 views into one high-quality picture with a total resolution of up to 52 megapixels.

Our first-generation L16 camera will start reaching consumers early next year, for an initial retail price of $1,699. Meanwhile, we have started thinking about future versions. For example, we can improve the low-light performance. Because we are capturing so many redundant images, we don’t need to have every one in color. With the standard sensors we are using, every pixel has a filter in front of it to select red, green, or blue light. But without such a filter we can collect three times as much light, because we don’t filter two-thirds of the light out. So we’d like to mix in camera modules that don’t have the filters, and we’re now working with On Semiconductor, our sensor manufacturer, to produce such image sensors.
"

Mentor Graphics CEO on Image Sensor Market Growth

Mentor Graphics CEO Wally Rhines presents his view on the semiconductor industry and says a few words on the image sensor market (the link works only in Internet Explorer for me):

SK Hynix to Try Foundry/Custom Model for its CIS Business

ETNews: As Hynix moves into 13MP sensor mass production at its 300mm M10 fab in Icheon in 2017, the company plans to reduce production of low-priced, low-resolution CIS at its 200mm M8 plant in Cheongju. Instead, it is going to turn that capacity toward foundry production of other chips, such as DDICs and PMICs. Eventually, SK Hynix is going to stop low-resolution sensor production at the M8 fab.

As a part of this plan, SK Hynix publicly announced that it has recently acquired all of SiliconFile's assets, worth $3.98 million (4.5 billion KRW), from SiliconFile, its wholly owned CIS design subsidiary. SiliconFile is becoming SK Hynix's CIS design house and is supposed to find new fabless customers.

"Experts believe that variety of businesses that have been competing against each other in a field of fabless design can become customers of Silicon File." SK Hynix appointed Director Lee Dong-jae serving as the department head of foundry business department, as SiliconFile board director.

“Receiving company assets from SiliconFile and changing SiliconFile into a design house indicates that SK Hynix is officially promoting its non-memory semiconductor business,” says an ETNews industry source.

Hynix VP KD Yoo, who established and led the Hynix image sensor business over the years, has left the company and is now a professor at Hanyang University.

SK Hynix in Cheongju. White building at 2 o’clock is M8

DENSO Works with Toshiba and Sony on ADAS

JCN Newswire: DENSO and Toshiba have reached a basic agreement to jointly develop a Deep Neural Network-Intellectual Property (DNN-IP), which will be used in the image recognition systems that the two companies have been independently developing for ADAS and automated driving technologies.

Because of the rapid progress in DNN technology, the two companies plan to make the technology flexibly extendable to various network configurations. They will also make the technology able to be implemented on in-vehicle processors that are smaller, consume less power, and feature other optimizations.

DENSO has been developing DNN-IP for in-vehicle applications. By incorporating DNN-IP in in-vehicle cameras, DENSO will develop high-performance ADAS and automated driving systems. Toshiba will partition this jointly developed DNN-IP technology into dedicated hardware components and implement them on its in-vehicle image recognition processors to process images using less power than image processing systems with DSPs or GPUs.

DENSO also invests in the US-based machine learning startup THINCI. “We are thrilled DENSO is our lead investor,” said THINCI CEO Dinakar Munagala. “The automotive industry is one of the earliest adopters of vision processing and deep learning technology. DENSO’s investment in THINCI’s trailblazing solution confirms our own belief that our innovation has much to offer, not only in the automobile but in the wide range of everyday products.”


JCN Newswire: DENSO announces that the image sensors provided by Sony have helped DENSO improve the performance of its in-vehicle vision sensors, which can now detect pedestrians at night.

Sony image sensors, which are also used in surveillance and other monitoring devices, enable cameras to take clear images of objects even at night. DENSO has improved Sony's image sensors in terms of ease of installation, heat resistance, vibration resistance, etc., for use in vehicle-mounted vision sensors. DENSO has also used Sony's ISPs for noise reduction and optimization of camera exposure parameters to better recognize and take clearer images of pedestrians at night.

IHS on Security Market Trends

IHS publishes its Top Video Surveillance Trends for 2016 report. A few quotes:

"4K video surveillance has been repeatedly touted as a major trend in video surveillance for the last 18 months and it can sometimes be challenging to see past the marketing hype. Yet make no mistake, the video surveillance market is going to 4K cameras; it’s only a matter of when rather than if. For 2016, IHS is predicting:
• Volumes of 4K cameras shipped in 2016 will remain low, at less than 1% of the 66 million network cameras projected to be shipped globally. We are unlikely to see over a million 4K network camera units shipped in a calendar year until 2018.
• More “4K-compliant” cameras will be launched because of the increased use of 4Kp30 and above chipsets, meaning more cameras adhering to 4K standards, such as SMPTE ST 2036-1.
• As with HD surveillance cameras, early 4K models offered the resolution only at lower frame rates. We’ll see more cameras with higher frame rates offered, and closer ties to other video standards.
"

eWBM Launches its 2nd Gen Dual Aperture Depth Map Processor: DR1152

PRNewswire: eWBM launches the DR1152 depth map processor, the successor to the DR1151 dual aperture processor.

The DR1152's predecessor, the DR1151 (announced last year), accomplished cost reduction in the depth map processor market by implementing Dual Aperture's 4-color 3D image solution technology, which extracts RGB and IR signals from a single image sensor. The DR1152 increases the number of depth levels to 121 (nearly 8 times that of the predecessor). Unlike the predecessor, which works only with 4-color sensors, the DR1152 supports both 4-color and 3-color image sensors. eWBM believes that the 3-color adaptation opens a new market sector since it removes the barrier of sourcing only 4-color image sensors. This will expedite time-to-market significantly since widely available image sensors can be bought off the shelf.

The DR1152 supports a depth image resolution of up to XGA (1024 x 768) at 30 fps, or 60 fps at VGA resolution. Technologies such as blur-channel depth combining, noise reduction, and edge thinning are included in the new product. Despite all these significant improvements, the DR1152 has a lower gate count than its predecessor.

OmniVision Announces Two PureCel Sensors

PRNewswire: OmniVision announces the OV12895, a PureCel Plus-S stacked-die sensor. The 12MP sensor leverages a 1.55um pixel and a high-speed architecture aimed at consumer-grade drones, surveillance systems, and 360-degree action cameras.


"We are seeing rapid growth in the markets for consumer drones, surveillance systems, and 360-degree action cameras, in part due to the increasing demand for aerial photography and 4K-resolution panoramic videos in security and virtual reality applications. The 12-megapixel OV12895 aligns well with these consumer product segments because it strikes a balance between solid pixel performance and high resolution, in a widely used 1/2.3-inch optical format," said Kalairaja Chinnaveerappan, senior product marketing manager at OmniVision. "The OV12895 builds on our latest-generation PureCel Plus-S stacked-die architecture and has many desirable features for these applications."

PRNewswire: OmniVision announces the latest addition to its PureCel image sensor line, the OV2732. Due to the OV2732's HDR mode, power efficiency and small dimensions, the sensor is suited for compact and ultra-low-power surveillance devices.


"The industry is seeing a tremendous increase in demand for IoT-compatible home monitoring systems. Today, resolution at 720p is already considered a mainstream segment, and interests in ultra low-power and good low-light 1080p imaging solutions are now on the rise," said Chris Yiu, senior strategic marketing manager at OmniVision. "Available in a 1/4-inch optical format, the OV2732 sets itself apart by delivering crisp 1080p HD video with advanced features such as frame synchronization and staggered HDR within a cost-effective and power efficient package. It's truly a no-compromise imaging solution for the security market."

Both sensors are currently available for sampling, and are expected to enter volume production in Q1 2017.

ams Acquires Heptagon

Austria-based ams is to acquire 100% of the shares in Heptagon, a company dealing with optical packaging and micro-optics which, in turn, acquired MESA Imaging a couple of years ago. Heptagon's headquarters and manufacturing are based in Singapore, while its R&D centers are in Rueschlikon, Switzerland, and Silicon Valley, USA. The company has over 830 employees, including around 120 engineers and 500 manufacturing staff. Heptagon's IP portfolio, primarily in optical packaging, includes more than 250 patent families.

Heptagon’s current 12 month revenue run rate is around USD 90m at negative operating profitability due to current underutilization of production capacity. Heptagon expects substantial revenue growth over the coming years starting mid-year 2017, based on its existing revenue and capacity pipeline and customer commitments. To prepare for this expected growth, Heptagon has already embarked on a major expansion of its Singapore manufacturing capacity with a total capital investment of more than USD 250m in 2016/2017. The expansion is based on a confirmed customer commitment for usage of the additional capacity and is fully funded from existing cash in the business, requiring no funding by ams.

The transaction combines an upfront consideration in cash and shares with a substantial deferred earn-out consideration. The upfront consideration includes USD 64m in cash from available funds, a capital increase of 15% of outstanding shares from authorized capital (excluding subscription rights) and shares from currently held treasury shares for a total value of the upfront consideration of approx. USD 570m. The earn-out consideration will be contingent on future results of Heptagon’s business over fiscal year 2017 with a potential maximum value of USD 285m. Following the upfront share transaction, current Heptagon shareholders which include financial investors, management, and employees are expected to hold around 20% in ams. The transaction is expected to close within the next three months.

Tech.eu estimates the total acquisition price at up to $919 million, or approximately €845 million, if Heptagon hits the targets set for its next fiscal year. "Heptagon has raised tens of millions over the years, including from GGV Capital, Innovations Kapital of Sweden, Innovacom of France, Nokia Growth Partners, High Tech Private Equity, Credence Partners, Jolt Capital, AAC Technologies and Heliconia Capital Management."

Alexander Everke, CEO of ams, commented on the transaction, “Combining ams and Heptagon creates the clear #1 in optical sensing technologies and fast-tracks our innovation capabilities. As a result, we expect ams to drive the optical sensing agenda in the years to come and broaden its market reach. Together with our leadership position in our other sensing focus areas Environmental, Imaging, and Audio, this strategic transaction is going to transform ams into the global leader in sensor solutions.”

Update: Antonio Avitabile, Sony Europe, comments at LinkedIn: "congrats Bo [Ilsoe] and Heptagon team. I understand cash was 64mln USD for an almost 1bln USD deal. Great job on the buying side."

OmniVision Announces Low-Power 1/3-inch 1080p30 Sensor for Security Applications

PRNewswire: OmniVision announces the OV2735, a 1080p30 image sensor for mainstream security and surveillance cameras. "The 1/3-inch 1080p sensor is the most popular format in the mainstream surveillance camera market. Given its excellent performance and advanced features, the OV2735 is one of the most competitive drop-in camera solutions available for this segment," said Chris Yiu, senior strategic marketing manager at OmniVision. "The OV2735 leverages our proven OmniPixel3-HS™ technology to deliver low-light sensitivity in a more compact and power-efficient package."

The OV2735 is currently available for sampling, and is expected to enter volume production in Q1 2017.

Movidius VPU to Appear in Hikvision Smart Cameras

Movidius’ Myriad 2 Vision Processing Unit (VPU) technology will be powering a new lineup of Hikvision smart cameras. “Advances in artificial intelligence are revolutionizing the way we think about personal and public security,” says Movidius CEO Remi El-Ouazzane. “The ability to automatically process video in real-time to detect anomalies will have a large impact on the way city infrastructure is being used. We’re delighted to partner with Hikvision to deploy smarter camera networks and contribute to creating safer communities, better transit hubs and more efficient business operations.”

Thanks to Deep Neural Networks and stereo 3D sensing, Hikvision has been able to achieve up to 99% accuracy in their advanced visual analytics applications. Some of these applications include: car model classification, intruder detection, suspicious baggage alert, and seatbelt detection. The Myriad 2 platform allows these functions to now be processed instantaneously onboard the camera, rather than being sent to the cloud for processing.

“There are huge amounts of gains to be made when it comes to neural networks and intelligent camera systems,” says Hikvision CEO Hu Yangzhong. “With the Myriad 2 VPU we’re able to make our analytics offerings much more accurate, flagging more events that require a response, while reducing false alarms. Embedded, native intelligence is a major step towards smart, safe and efficiently run cities. We will build a long-term partnership with Movidius and its VPU roadmap.”


Update: One of the cool-looking Hikvision smart cameras with Movidius technology inside:

Framos Compares ON Semi Automotive Sensors

Framos publishes an EMVA 1288 comparison of ON Semi's older AR0132 and new BSI AR0136 automotive HDR CMOS sensors, both 1.2MP with 3.75um pixels:

e2v Launches Improved GS Sensors

e2v launches the Emerald family of CMOS sensors, featuring what is claimed to be the world’s smallest true global shutter pixel available on the market today (2.8µm). The DSNU of the new sensors is 10 times better than in other e2v CMOS products. This allows cameras to perform better at high temperatures and enables long exposures to be used in low-light applications such as microscopes or outdoor cameras for surveillance, speed and traffic applications.

e2v’s Emerald family comprises a 16MP device (4096 x 4096 pixels), which is the first to be released, a 12MP (4096 x 3072 pixels) and an 8MP (4096 x 2160 pixels). These high-resolution formats are a world first and include a one-inch optical format, which can be interfaced with a compact C-mount lens.

Gareth Powell, Marketing Manager for Professional Imaging at e2v, said, “Our new Emerald sensors use advanced CMOS image process technology and pixel design to offer a low noise global shutter pixel, with an electro-optical performance tuned to meet the demanding requirements of the machine vision industry. The sensors deliver a quantum efficiency of more than 65%, a full well capacity of over 7ke, a typical temporal noise of 4e- and a new low-noise mode offering typically around 2e-.”

The whole Emerald family features the same pixel, processing, readout structures and ceramic Land Grid Array (LGA) package to simplify integration, helping to lower development costs for camera makers. The new products have dedicated embedded features including HDR modes, 8/10/12 bit ADCs, high speed outputs (60fps at 10 bits, full 16MP resolution), and a number of power saving modes.
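
From the quoted figures, an approximate dynamic range follows directly (taking DR as the quoted 7ke full-well capacity over temporal noise; e2v's own datasheet definition may differ):

  \mathrm{DR} \approx 20\log_{10}\!\left(\frac{7000\,e^-}{4\,e^-}\right) \approx 65\,\mathrm{dB},
  \qquad
  20\log_{10}\!\left(\frac{7000\,e^-}{2\,e^-}\right) \approx 71\,\mathrm{dB}\;\text{in low-noise mode}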


e2v publishes a YouTube video on the Emerald features:

Fraunhofer Promises Automotive SPAD-based LiDAR in 2018

The Fraunhofer Institute works on a SPAD LiDAR for autonomous cars which, in theory, could prevent accidents like the Tesla crash:

“A camera’s accuracy depends very much on the lighting available. In this case, it failed. The radar system recognized the obstacle, but couldn’t locate it precisely and mistook the truck for a road sign,” says Werner Brockherde, head of the CMOS Image Sensors business unit at the Fraunhofer Institute for Microelectronic Circuits and Systems IMS in Duisburg.

The researchers have dubbed the new generation of sensors “Flash LiDAR.” They are composed of photodiodes developed at Fraunhofer IMS known as single photon avalanche diodes (SPADs). “Unlike standard LiDAR, which illuminates just one point, our system generates a rectangular measuring field,” Brockherde explains.

“The first systems with our sensors will go into production in 2018,” Brockherde says.
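
For context on why SPADs suit such flash LiDAR sensors, the basic time-of-flight relation (not a Fraunhofer-specific figure) shows that centimetre-level ranging requires sub-nanosecond timing, which single-photon detectors can resolve:

  d = \frac{c\,\Delta t}{2}
  \quad\Rightarrow\quad
  \Delta t = \frac{2d}{c} \approx 0.67\,\mathrm{ns\ per\ 10\,cm\ of\ range}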


Fraunhofer LiDAR SPAD sensor

ON Semi Powers Light's Multi-Aperture Camera

BusinessWire: ON Semiconductor helps Silicon Valley start-up Light by providing its image sensors for the L16 multi-aperture camera. Through close collaboration with Light, ON Semiconductor supplies specially customized sensor devices based on its 1/3.2-inch format AR1335 CMOS sensor product. With up to 10 sensor devices capturing image data simultaneously, Light's L16 camera is supposed to deliver an impressive 52MP resolution, plus over 5X optical zoom without any degradation in image quality.

Meanwhile, Light publishes some info on calibration and alignment of its 16-sensor camera:

"A camera that has one optical path worries less about what is “true” or “real” because there is only one truth, one reality. This reality can be objectively tested and optimized, but it requires adjusting only one path.

A multi-aperture camera with sixteen optical paths (apertures + mirrors + sensors) contends with sixteen realities. In order to merge those realities to create one truth (final image), the camera needs to know precisely where each optical path is relative to the others.

In Light’s Palo Alto office, we’ve been using a specially-designed calibration box to “teach” each L16 prototype where all of its optical paths are relative to the others and relative to the world it will capture. This allows the sixteen paths to behave as one - maintaining the same consistency as a camera with only one optical path.
"

Light L16 calibration box
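
A minimal sketch of the kind of per-module calibration described above, using OpenCV and a checkerboard target; this illustrates the general technique only, not Light's proprietary target or algorithms:

  import cv2
  import numpy as np

  PATTERN = (9, 6)   # inner corners of the hypothetical checkerboard target
  SQUARE = 0.02      # square size in metres

  # 3D corner coordinates of the target in its own frame (z = 0 plane)
  objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
  objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

  def find_corners(images):
      # Detect checkerboard corners in a list of grayscale images
      obj_pts, img_pts = [], []
      for img in images:
          ok, corners = cv2.findChessboardCorners(img, PATTERN)
          if ok:
              obj_pts.append(objp)
              img_pts.append(corners)
      return obj_pts, img_pts

  def calibrate_module(images):
      # Intrinsics (camera matrix, distortion) of a single camera module
      obj_pts, img_pts = find_corners(images)
      size = images[0].shape[::-1]
      _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
      return K, dist

  def relative_pose(images_ref, images_mod, K_ref, d_ref, K_mod, d_mod):
      # Rotation/translation of one module relative to the reference module;
      # assumes the target is detected in every image of both synchronized lists
      obj_pts, pts_ref = find_corners(images_ref)
      _, pts_mod = find_corners(images_mod)
      size = images_ref[0].shape[::-1]
      _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
          obj_pts, pts_ref, pts_mod, K_ref, d_ref, K_mod, d_mod, size,
          flags=cv2.CALIB_FIX_INTRINSIC)
      return R, T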

Chipworks Estimates iPhone 7 Camera Cost at 9.5% of BOM

Chipworks-TechInsights' iPhone 7 reverse engineering report estimates that camera and imaging functions cost about 9.5% of the BOM:

All-New Tesla Autopilot Has 8 Cameras

Tesla announces that all its new cars will be equipped with its own design of autopilot hardware that will eventually provide fully autonomous driving:

"Eight surround cameras provide 360 degree visibility around the car at up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system. A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength, capable of seeing through heavy rain, fog, dust and even the car ahead.

To make sense of all of this data, a new onboard computer with more than 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software.
"

"Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control. As these features are robustly validated we will enable them over the air, together with a rapidly expanding set of entirely new features. As always, our over-the-air software updates will keep customers at the forefront of technology and continue to make every Tesla, including those equipped with first-generation Autopilot and earlier cars, more capable over time."



Update: SeekingAlpha publishes its interpretation of the Tesla announcement and the Autopilot 2.0 development.

ST ToF Sensor in iPhone 7

Chipworks discovered a ToF proximity sensor, apparently made by ST, next to the front camera in iPhone 7:

"...when we looked at the selfie camera side and took out the sub-assembly, both the ambient light sensor and the LED/sensor module were different from those in the 6s model.

When we took them off and looked at the module, it looked very STMicroelectronics-ish to us. Looking at the die, it is not the same, but definitely similar in style and die numbering (S2L012AC) to the VL53L0/S3L012BA die with the two SPAD arrays, however this time the LED is bonded on top of the ToF die to give a very compact module.

Based on this we think it is safe to conclude that the proximity sensor is now a ToF sensor that can also act as an accurate rangefinder for the selfie camera. It was also in the 7 Plus, so a good design win for STMicroelectronics. So far nothing has been announced by either Apple or STMicroelectronics, but it is yet another one of the subtle improvements that we see in the evolution of mobile phones.
"

Jean-Luc Jaffard Joins Chronocam

Chronocam, a Paris-based developer of event-driven vision sensors, announces that Jean-Luc Jaffard has been named VP Sensor at the company. Jaffard brings more than 30 years’ experience in the chip industry, including a lengthy career developing the imaging business at STMicroelectronics.

Thanks to PD for the info!

EMVA 1288 Update Released

The EMVA 1288 Release 3.1 standard is to include a new template for machine vision camera datasheets, among other improvements:

"The new release is now open for public review and discussion and will become the official release 3.1 on December 30, 2016, if no objections are filed.

The new release contains only a few refinements and additions, because release 3.0 already proved to be a robust and stable release. The major progress is the new datasheet template. This makes it easy to compare the main features of cameras, with data summarized in a standardized way on a single page. The two other major additions are: a total SNR curve including the spatial non-uniformities, and diagrams of horizontal and vertical profiles illustrating the spatial non-uniformities. The document can be downloaded from EMVA’s website at
http://www.emva.org/standards-technology/emva-1288/emva-standard-1288-downloads/"
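
For reference, the total SNR curve mentioned in the quote extends the usual EMVA 1288 shot-noise model with the non-uniformity terms; schematically, with notation loosely following the standard (consult release 3.1 for the exact definition):

  \mathrm{SNR}_{\mathrm{total}}(\mu_p) =
  \frac{\eta\,\mu_p}
  {\sqrt{\sigma_d^2 + \sigma_q^2/K^2 + \eta\,\mu_p
         + \mathrm{DSNU}_{1288}^2 + \mathrm{PRNU}_{1288}^2\,(\eta\,\mu_p)^2}}

where \eta is the quantum efficiency, \mu_p the mean number of photons per pixel, \sigma_d the dark noise, \sigma_q the quantization noise, and K the overall system gain.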

Thanks to TL for the info!

IS Americas Interviews Google Image Scientist

The Image Sensor Americas conference, to be held on October 25-26, 2016 in San Francisco, publishes an interview with Jonathan Phillips, Staff Image Scientist at Google. Jonathan talks about the main mobile imaging challenges:

"From an image quality standpoint, capturing images at low light continues to be a challenge for not only the image sensor but also the image processing. Fewer photons means less signal for the hardware, so the software has to do more to generate good images. Another challenge is the latency and accuracy of autofocus. Camera users want to capture in-focus memories without having a lag after pushing the shutter button. With moving objects and camera handshake, this isn't always easy."