
Blogs

Causes of 3D Vertigo and Tips for Alleviating 3D Vertigo (Part 2)

3. Specific measures to reduce surgeon discomfort during 3D endoscopic surgery:

First, surgeons themselves should be made aware that discomfort is possible during their first operation. As mentioned above, most doctors experience discomfort such as dizziness during their first 3D laparoscopy; from the second case onward, however, the incidence and severity drop sharply, and the discomfort gradually disappears as the number of surgeries increases. This shows that repeated practice relieves dizziness, so doctors can be trained on the operation of 3D endoscopic camera systems to improve their adaptability to 3D images and their surgical skills, thereby reducing discomfort.

Secondly, doctors need to rest and adjust in time: if they feel uncomfortable during the operation, they should rest promptly, close their eyes and take a deep breath, or briefly look away from the screen to help the body rebalance. At the same time, operating time should be scheduled reasonably to avoid long periods of continuous surgery.

In addition to the above measures, attention should also be paid to the precautions for using the camera system components:

(1) 3D endoscope: Ensure that the endoscope is accurately positioned when inserted into the patient's body cavity to avoid unnecessary movement. Check the endoscope lens regularly to prevent dust and contamination from degrading the image. During surgery, avoid impacts and avoid bending the shaft when handling the endoscope.

(2) Camera system: Use a high-resolution camera head to ensure that the captured image is clear and stable. Adjust the parameters of the 3D endoscopic camera system, such as brightness and contrast, according to the surgeon's visual habits and the needs of the procedure, to reduce visual fatigue and dizziness and to adapt to different surgical environments.

(3) Monitor: Factors affecting the visual comfort zone include, but are not limited to, objective and adjustable factors such as the size, resolution, and viewing distance of the 3D monitor. Therefore, when the camera system is actually in use, the 3D display should be adjusted to each surgeon's height and field of view. Ensure that the display is level with the doctor's line of sight and that the vertical viewing angle is kept within 30 degrees to reduce the occurrence of dizziness.

(4) Image processing system: Ensure that the image processing system is stable and efficient and can process and convert image data in real time. Check the system's software and hardware versions regularly and update and upgrade them in a timely manner to ensure stability and compatibility.


Causes of 3D Vertigo and Tips for Alleviating 3D Vertigo (Part 1)

Preface: With the rapid development of medical technology, 3D endoscopic surgery has gradually been popularized and recognized in clinical practice. As an important tool of modern surgery, the 3D endoscopic camera system has brought revolutionary changes to surgical operations with its unique stereoscopic imaging technology and high-definition images. However, studies have shown that doctors may experience discomfort such as vertigo when using the system for the first time. This phenomenon has aroused widespread concern: why does an advanced 3D endoscopic surgical system cause vertigo, and how can this problem be effectively alleviated? This issue introduces the advantages of 3D endoscopic camera system surgery, explores the reasons why doctors may experience discomfort such as vertigo during 3D endoscopic surgery, and discusses methods of relieving the symptoms.

1. The clinical advantages of the 3D endoscopic camera system are mainly reflected in the following aspects:

Preoperative: accurate diagnosis and planning to reduce surgical risk. The 3D endoscopic camera system restores a true three-dimensional view for the doctor. Compared with a traditional 2D endoscope, it shows the location of lesions, the layering of tissue, and the depth relationships between tissues and organs more intuitively. This intuitive visual effect helps doctors make more accurate diagnoses before surgery and formulate more reasonable surgical plans.

Intraoperative: improved surgical accuracy, success rate, and safety. 3D images help doctors identify the shape, structure, and traction direction of tissues and lesions more accurately during surgery. This helps them avoid damaging fine structures such as blood vessels and nerves and reduces surgical trauma. 3D endoscopic camera systems perform particularly well in operations involving complex anatomy, such as minimally invasive surgery, organ resection, and vascular anastomosis, improving the success rate of surgery and, to a certain extent, patient survival.

Reduced learning barriers. For young doctors, the 3D endoscopic camera system provides richer image information, which helps them master surgical skills faster. This further lowers the learning barrier for young doctors and shortens the learning curve.

2. Causes of vertigo when using 3D endoscopic camera systems, and factors affecting its severity

Causes of vertigo: The human sense of balance refers to the body's perception of its position, posture, and relationship to the surrounding environment, whether in motion or at rest. It is composed of the vestibular sense, vision, and proprioception. Sensory information is carried by afferent nerves to the vestibular nuclei and then transmitted to the balance centers of the cerebellum and hypothalamus to maintain the body's balance. When the sense of balance is disturbed, the body misperceives its own balance state and, in severe cases, strong symptoms of vertigo result.

Dizziness occurs when using a 3D camera system because the motion information from visual input does not match the motion information from vestibular input; it is essentially a functional balance disorder. When using the system, the doctor's body usually remains still: proprioception perceives stillness, and the vestibular sense collects no motion information. The high-definition, realistic 3D images, however, make the doctor's vision perceive that the objects in front of them are constantly moving, creating the illusion that the doctor is moving too. This mismatch between vision and the vestibular sense confuses the vestibular nuclei when they integrate the information, the balance center cannot issue correct instructions, and dizziness follows. Different people have different tolerances and present differently, so the degree of dizziness when watching 3D video varies from person to person.

Factors affecting the degree of dizziness:

(1) Image characteristics. Brightness and chromaticity: the human visual system is very sensitive to color and brightness; uneven color distribution and content that is too bright or too dark produce an uncomfortable viewing experience and can cause dizziness. Parallax gradient: images with rich, complex content are more likely to cause dizziness than images with simple content. Temporal changes: the intensity of motion in the image also reflects, to some extent, the degree of mismatch between vision and the vestibular sense; rapid and sudden motion changes are more likely to cause dizziness.

(2) Because of the nature of surgery, doctors need to stare at a high-brightness, high-contrast screen for a long time, which burdens the eyes and easily causes visual fatigue and dizziness. In addition, factors such as parallax, frame rate, latency, field of view, and focal length in the 3D endoscopic surgery system may also aggravate the conflict between vision and motion perception, further causing or worsening dizziness.

In the next article we will continue to discuss measures to reduce this discomfort.


3D Technology Principles — Polarization 3D display technology

1.1 Mainstream technologies for 3D display terminals

There are many mainstream technologies for 3D display terminals. Let's take a look at some common methods.

Anaglyph 3D: anaglyph (chromatic) 3D first separates the spectral information through a rotating filter wheel, uses filters of different colors to filter the image, and then uses red-blue glasses so that the left and right eyes receive different signals. Advantage: low technical difficulty and low cost. Disadvantages: the 3D image quality is not ideal, and the image and the edges of the screen are prone to color cast.

Active shutter 3D: a pair of active LCD shutter glasses alternately blocks the left and right eyes so that each eye sees its own image in turn, creating 3D depth perception from a single display. The display's refresh rate must reach 120 Hz. Advantage: no loss of resolution and good light-and-dark performance. Disadvantages: the glasses are heavy and must be charged regularly.

Polarization 3D: a patterned polarizing film on the LCD surface gives the output image different polarization states in different rows; viewed through polarizing glasses, the image appears three-dimensional. Advantage: the glasses are thin and cheap. Disadvantages: the resolution is halved and the brightness is reduced.

Glasses-free 3D: through screen display technology alone, without any aids, the left and right eyes see two slightly different pictures on the display, producing a sense of depth. Advantage: no glasses required. Disadvantages: the viewing position is constrained, and autostereoscopic displays that rely on eye tracking work best only for a single viewer.

1.2 Polarization 3D display technology

Next, let's take a closer look at the principle of polarization 3D display. FPR (Film Patterned Retarder) is a layer of retarder film precisely attached to the surface of the panel. Its function is to receive polarized light of a certain direction emitted by the display panel, rotate the polarization of different areas into different directions through its liquid crystal molecular layer, and let polarized glasses then separate the two polarizations, thereby achieving a 3D effect.

As shown in the principle diagram, ±1/4 wave-plate retarders are added to the 2D display module so that one row of pixels produces left-handed circularly polarized light and the next row produces right-handed circularly polarized light. Matching filters are placed in the polarized glasses, so the left eye receives only the left-handed light and the right eye only the right-handed light; each eye therefore receives its own image, producing a 3D stereoscopic effect.

When using polarized 3D display technology, pay attention to the viewing angle of the polarized display. There is a certain distance between the 3D retarder film and the pixel plane; when the vertical offset is large, light from a left-eye pixel can pass through the right-eye retarder strip and reach the right eye, causing crosstalk. Minimum viewing distance: 0.7 m. Optimal viewing distance: 1.5 m. Viewing angle: vertical ±15°.
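The vertical viewing-angle limit can be estimated with simple geometry. The sketch below is a rough illustration under assumed dimensions; the gap g and pixel pitch p are illustrative values, not specifications from this post.

```latex
% Rough crosstalk geometry for a patterned-retarder (FPR) panel.
% Assumptions (illustrative, not vendor specifications): the retarder film
% lies a gap g above the pixel plane, and each retarder strip has the same
% height as the pixel row pitch p. A ray leaving the centre of a pixel row
% at vertical angle \theta is displaced by g\tan\theta at the film, so it
% stays within its own strip only while g\tan\theta \le p/2:
\[
  \theta_{\max} \approx \arctan\!\left(\frac{p}{2g}\right)
\]
% With assumed numbers p = 0.5\,\mathrm{mm} and g = 1\,\mathrm{mm},
% \theta_{\max} \approx \arctan(0.25) \approx 14^{\circ}, the same order as
% the vertical viewing angle of \pm 15^{\circ} quoted above.
```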
Finally, the polarized glasses: they consist of left and right lenses whose polarization directions are perpendicular to each other. Each lens uses a polarizer to filter light vibrating in other directions, blocking light whose polarization does not match its film and passing only light polarized in the same direction, so each eye receives its own image; the parallax between the two images lets the brain synthesize a three-dimensional picture. Note: when lenses with different polarization directions are superimposed, the light transmittance drops significantly.

Advantages of polarized 3D display technology: no flicker, so the 3D image is very comfortable for the eyes; a wide viewing angle with no need for eye tracking, suitable for multiple viewers; light, comfortable glasses; 3D images without ghosting; high-definition 3D images without image dragging.

Disadvantages of polarized 3D display technology: brightness and resolution are somewhat reduced; the vertical viewing angle is narrow; the film attachment process requires high alignment accuracy; the FPR film is costly.


3D Technology Principles — 3D display technology principle

1. 3D display technology principle

3D display technology uses a series of optical methods to create parallax between the left and right eyes so that each eye receives a different image, forming a 3D stereoscopic effect in the brain. 3D display technology involves video generation and display across the entire chain of video production, storage/transmission, and display.

1.1 3D video shooting and production

There are two ways to shoot and produce 3D video: real 3D and digital 3D. Real 3D: two lenses simulate the human eyes, capturing the left-eye and right-eye images separately, after which the images are combined by computer to achieve a 3D effect. Digital 3D: a single-channel lens shoots ordinary pictures, digital post-processing of the 2D images then produces left-eye and right-eye pictures, and a computer combines them to achieve a 3D effect.

1.2 3D video formats

There are four common 3D video formats: the half-resolution composite format, the full-resolution composite format, the interleaved format, and the complementary-color format.

Half-resolution composite format: the synchronized left and right images are compressed to 1/2 in the horizontal or vertical direction, the other direction remaining unchanged, and the two videos are then combined side by side (left-right) or stacked (top-bottom); the images are stretched back during playback. The combined stream is exactly the same as 2D video in storage and decoding, so existing decoding software and hardware can be used directly, which gives good compatibility. The disadvantage is that the horizontal or vertical resolution is halved. (A minimal packing sketch is given after section 1.4 below.)

Full-resolution composite format: the synchronized left and right images are combined side by side or top-bottom at their original resolution. This format preserves the full resolution of each view, but the composite video is non-standard, has poor compatibility, and places high demands on playback devices.

Interleaved format: the synchronized left and right images are interleaved line by line in the vertical direction, so a stereo frame is displayed directly as interlaced fields. The advantage is that it places low demands on playback equipment and can be played back without additional decoding setup, and it supports continuous playback, mixed playback, and on-screen video information. The disadvantage is that the resolution is halved.

Complementary-color format: taking red and green as an example, the two complementary color channels each store one view of the video. There is no loss of resolution, but this format easily causes dizziness.

1.3 3D video input and display

The input and display of 3D video includes two modes: Da Vinci mode and 3D display mode.

1.4 3D display input modes

The 3D display interleaves the image according to the input mode. The host can output video streams in several formats: (1) top-bottom 3D: the two eye images are woven into one 4K frame; (2) left-right 3D: the two eye images are woven into one 4K frame; (3) line-interleaved: the two eye images are interlaced into one 4K frame; (4) simultaneous (SIMUL): the two eye images are transmitted independently.
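Before moving on to display adjustment, here is a minimal NumPy sketch of the half-resolution side-by-side packing described in section 1.2. The function name and the simple pixel-dropping decimation are illustrative assumptions, not any particular product's encoder.

```python
import numpy as np

def pack_side_by_side_half(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Half-resolution side-by-side packing.

    Each eye image (H x W x 3) is compressed to half its width and the two
    halves are placed left/right in a single frame of the original size, so
    the packed frame can be stored and decoded exactly like a 2D video.
    The cost is that the horizontal resolution is halved.
    """
    assert left.shape == right.shape
    # Simple 2:1 horizontal decimation; a real encoder would low-pass filter first.
    left_half = left[:, ::2]
    right_half = right[:, ::2]
    return np.concatenate([left_half, right_half], axis=1)

# Example with dummy 1080p frames:
left_eye = np.zeros((1080, 1920, 3), dtype=np.uint8)
right_eye = np.zeros((1080, 1920, 3), dtype=np.uint8)
packed = pack_side_by_side_half(left_eye, right_eye)
print(packed.shape)  # (1080, 1920, 3): same size as a normal 2D frame
```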
1.5 Adjustment for 3D display

Parallax formation: the positions of the same target in the left and right sensor images differ slightly, and the difference varies with distance. Parallax adjustment: by shifting the left and right images horizontally in opposite directions, you can adjust the binocular parallax and change the perceived depth, as the sketch below illustrates.
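A minimal NumPy sketch of this opposite-direction shift; the function name, sign convention, and black edge fill are illustrative assumptions.

```python
import numpy as np

def adjust_parallax(left: np.ndarray, right: np.ndarray, shift_px: int):
    """Shift the left and right eye images horizontally in opposite directions.

    A positive shift_px moves the left image left and the right image right,
    increasing positive parallax so the scene appears to sink further behind
    the screen; a negative value does the opposite and pulls it forward.
    Edge columns uncovered by the shift are filled with black.
    """
    def shift(img, dx):
        out = np.zeros_like(img)
        if dx > 0:
            out[:, dx:] = img[:, :-dx]   # move content to the right by dx pixels
        elif dx < 0:
            out[:, :dx] = img[:, -dx:]   # move content to the left by |dx| pixels
        else:
            out[:] = img
        return out

    return shift(left, -shift_px), shift(right, shift_px)
```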


3D Technology Principles — Binocular Stereo Vision Principle

1. Human binocular stereo vision

When humans look at the world around them, they not only see the width and height of objects but also perceive their depth, and can judge the distance between objects or between themselves and an object. The main reason for this stereoscopic ability is that people normally view objects with both eyes at once. Because the visual axes of the two eyes are separated by about 65 mm, the left eye and the right eye receive slightly different images when looking at an object at a given distance (i.e., parallax). Through the movement and accommodation of the eyes, the brain combines the information in the two images to produce a sense of depth; in other words, binocular parallax produces stereoscopic perception.

2. 3D display

Therefore, to make people see 3D images, the left and right eyes must be shown different images with a certain disparity between them, simulating the way the eyes actually view the world, so that a 3D stereoscopic impression arises. Taking a 3D display as an example, suppose three points A, B, and C are shown on the screen, each with a left and a right image presented to the corresponding eye:

The left-eye image Al and the right-eye image Ar of point A coincide on the screen. Both eyes then see the same image point, and the brain judges that point A lies at A', in the plane of the screen.

The left- and right-eye images of points B and C do not coincide on the screen. Through the 3D display, each image is seen only by the corresponding eye; the observer's visual nervous system analyzes and fuses the two images into the spatial images of points B and C, judging that point B lies at B', behind the plane of the screen, and that point C lies at C', in front of it.

3D displays thus use the binocular parallax of the human eye to create a sense of depth at points A, B, and C. The pixel offset between a pair of images on the display is called horizontal parallax. At point B the left-eye image lies to the left of the right-eye image, which is called positive parallax (the point appears sunken into the screen); at point C, conversely, the left-eye image lies to the right of the right-eye image, which is called negative parallax (the point appears to protrude from the screen); and when the two images coincide, as at point A, the horizontal parallax is 0, which is called zero parallax (the point appears on the screen).
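The link between on-screen parallax and perceived depth can be written down with similar triangles. This is a standard textbook relation added here for illustration rather than a formula from the original post; e is the interpupillary distance (about 65 mm, as noted above), D the viewing distance, and p the horizontal parallax, taken as positive when the left-eye image lies to the left of the right-eye image.

```latex
% Perceived depth Z (distance from the viewer) produced by horizontal screen
% parallax p, for eye separation e and viewing distance D (similar triangles):
\[
  \frac{p}{e} = \frac{Z - D}{Z}
  \qquad\Longrightarrow\qquad
  Z = \frac{e\,D}{e - p}
\]
% p = 0      : Z = D   -> zero parallax, the point appears on the screen
% 0 < p < e  : Z > D   -> positive parallax, the point appears behind the screen
% p < 0      : Z < D   -> negative parallax, the point appears in front of the screen
% As p approaches e the rays become parallel and Z diverges, one reason why
% excessive positive parallax quickly becomes uncomfortable.
```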
3. 3D video

3D video uses this principle of stereoscopic imaging to present the left and right eyes with images that have horizontal parallax, and the brain's judgment and analysis produce the sense of depth. Unlike viewing in real life, however, when watching 3D video the eyes must focus (accommodate) on the screen while the lines of sight converge in front of or behind it, which creates a convergence-accommodation conflict. When this conflict exceeds a certain range, that is, when the horizontal parallax is too large and exceeds the comfort zone, people feel dizzy and uncomfortable; if the horizontal parallax is too small, the stereoscopic effect is weakened. Therefore, maintaining a reasonable parallax range is a necessary condition for watching 3D video comfortably and obtaining the best 3D effect.

Factors that affect the visual comfort zone include the size, resolution, and viewing distance of the 3D display and the viewer's interpupillary distance. In addition, since the visual comfort zone is a subjective, statistical concept, stereoscopic comfort also varies from person to person because of individual differences in age, pupillary distance, vision, and so on. By adjusting the display parallax, 3D video can be adapted to different viewing conditions and its stereoscopic effect enhanced without causing eye fatigue or dizziness, and users can tune the stereoscopic effect to personal preference for the best viewing experience.

Thanks for watching.


ICG and Near-infrared Fluorescence Imaging (Part II)

Near-infrared fluorescence imaging is a technology that uses near-infrared light to excite fluorescence signals and form an image from them. Through this technology, we can observe fine structures and biomolecular information that cannot be seen in white-light mode. Let's take a closer look at the principles and applications of near-infrared fluorescence imaging!

1. Principle of the ICG endoscopy system

Endoscopic surgery lacks tactile feedback and relies heavily on visual cues to distinguish different types of tissue structure. NIR fluorescence endoscopy provides real-time imaging of the tissues and vessels beneath the surface, improving visibility under the endoscope during surgery. NIR light (700-1000 nm) penetrates deeply into tissue and suffers little interference from tissue autofluorescence. NIR fluorescent contrast agents maximize the signal-to-background ratio and enhance contrast in different tissue types. The fluorescence signal is merged with the normal RGB color video, allowing direct anatomical localization intraoperatively (a small sketch of this overlay step is given at the end of this post).

2. The mechanism of ICG staining

3. Schematic diagram of the ICG organ and tissue imaging process

4. Schematic diagram of the ICG sentinel lymph node imaging process

Thanks for watching.
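As a closing illustration of the overlay step mentioned in point 1, here is a generic sketch of merging a fluorescence channel onto an RGB frame. The green pseudocolor, the blending weights, and the function name are assumptions made for the example, not the actual product pipeline.

```python
import numpy as np

def overlay_fluorescence(rgb: np.ndarray, nir: np.ndarray, gain: float = 0.7) -> np.ndarray:
    """Blend a single-channel NIR fluorescence image onto an RGB video frame.

    rgb : (H, W, 3) uint8 white-light frame
    nir : (H, W)    uint8 fluorescence intensity (0 = no signal)
    The fluorescence is shown as a green pseudocolor whose opacity follows
    the NIR intensity, so fluorescent tissue stands out while the surrounding
    anatomy stays visible for localization.
    """
    alpha = (nir.astype(np.float32) / 255.0)[..., None] * gain   # per-pixel opacity
    green = np.zeros_like(rgb, dtype=np.float32)
    green[..., 1] = 255.0                                        # pure green pseudocolor
    blended = rgb.astype(np.float32) * (1.0 - alpha) + green * alpha
    return blended.clip(0, 255).astype(np.uint8)
```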


ICG and Near-infrared Fluorescence Imaging (Part I)

Near-infrared fluorescence imaging is a technology that uses near-infrared light to excite fluorescence signals and form an image from them. Through this technology, we can observe fine structures and biomolecular information that cannot be seen in white-light mode. Let's take a closer look at the principles and applications of near-infrared fluorescence imaging!

Different image modes of the 2nd generation of the Hikimaging MIK5 Fluorescence Solution

1. Principles of near-infrared fluorescence imaging

The main absorption peaks of hemoglobin and water lie in the visible and infrared regions, with an absorption trough in the near-infrared 650-900 nm range [1], and human tissue absorbs and emits near-infrared light more weakly than visible light. Therefore, by choosing light in the near-infrared 650-900 nm range as the excitation light and fluorescence, images of deep tissue (1-2 cm) can be obtained.

[1] R. Weissleder, Nat. Biotechnol., 2001, 19, 316-317.

ICG (Indocyanine Green)

ICG is a near-infrared fluorescent dye approved by the FDA for in vivo use. Among all NIR fluorescent probe types, ICG is the only NIR fluorescent material clinically approved for liver function testing and ophthalmic angiography.

Advantages: Both the excitation light and the fluorescence are in the near-infrared region, within the optical window of tissue, allowing NIR fluorescence imaging. Dry ICG is stable and easy to store. It has low toxicity, is excreted rapidly, binds effectively to blood lipoproteins, does not leak from the blood circulation, and is safe for patients. It is easy to prepare, can be mass-produced, and is low in cost.

Adsorption of ICG fluorescent molecules: The amphiphilic nature of ICG allows it to enter the human body by intravenous injection and other routes; it can travel free or bound to plasma proteins and is transported throughout the body by blood and tissue fluid.

Indocyanine Green and its fluorescent reaction: Fluorescent molecules in the lowest energy state (the ground state), when irradiated by 700-800 nm excitation light, absorb photon energy and are excited to the excited state (the absorption process). Within the fluorescence lifetime, the molecules drop back from the excited state to the ground state with a certain probability (the emission process) and emit fluorescence in the 820-900 nm range. The fluorescence characteristics are shown in the figure: the light curve is the excitation (absorption) response curve, and the dark curve is the fluorescence emission intensity curve. In practice, the fluorescence intensity is significantly lower than the excitation light intensity.

2. The impact of ICG concentration

Many studies have shown that the fluorescence intensity produced by ICG is related to various factors, such as the ICG concentration and the solvent.

Effect of concentration: If the ICG concentration is too high, the ICG molecules in solution clump together to form aggregated (polymeric) ICG (the aggregation phenomenon of ICG fluorescent molecules). The fluorescence yield of aggregated ICG is extremely low, and its presence in solution also reduces the probability that free ICG molecules receive the excitation light.

Solvent influence: The ICG concentration at which the fluorescence intensity reaches its maximum differs from solvent to solvent.

Therefore, a higher concentration is not always better; it is recommended to choose an appropriate concentration and injection method according to the clinical scenario.

Appendix: Introduction to ICG (Indocyanine Green)

Physical properties:
1. Appearance: dark green-cyan or dark brown-red powder;
2. Polycyclic structure with a sulfonate group attached to each ring system; the polycyclic part is strongly lipophilic, while the sulfonate groups confer water solubility (soluble in water, DMSO, and methanol; slightly soluble in ethanol);
3. Lipid-water amphiphilicity: the two ends of the carbon chain in the ICG molecule carry different groups, giving it a certain degree of water solubility as well. This molecular structure makes ICG amphipathic, that is, both lipophilic and hydrophilic.

The advantages listed above also apply here. Disadvantage: unstable in aqueous solution and when exposed to light.

Thanks for watching.


Imaging parameters introduction (Part 2)

Color is an important indicator of an endoscopic system. Do you know which factors affect the color of an endoscopic system and how to adjust the system parameters to obtain the image color you want? Let's learn more.

1. Color mode

In image processing, common color models include HSB, RGB, CMYK, CIE XYZ, CIE LUV, CIE LAB, etc. In the HSB color model, hue, saturation, and brightness are the basic descriptions of image color. Hue is the color reflected from or transmitted through an object; it is measured as a position on the standard color wheel, from 0° to 360°, and is typically identified by a color name such as red, orange, or green. Saturation refers to the intensity or purity of a color; it represents the proportion of the gray component in the hue, measured as a percentage from 0% (gray) to 100% (a fully saturated, pure color). Brightness is the relative lightness or darkness of a color, usually measured as a percentage from 0% (black) to 100% (white).

2. Color gamut

Color gamut refers to the range of colors that a given device can reproduce, that is, the range of colors that a screen, printer, or printing device can express. The colors of the visible spectrum in nature constitute the largest color gamut, containing all colors the human eye can see. The size of a device's color gamut depends on the device, the medium, and the viewing conditions; a device with a larger gamut displays more colors. The CIE-xy chromaticity diagram developed by the CIE (International Commission on Illumination) is used to visualize color gamuts: the gamut a display device can express is represented by the triangle formed by its R, G, and B primaries, and the larger the area of this triangle, the wider the device's color gamut.

BT standards. BT.2020: promulgated in 2013 by the International Telecommunication Union Radiocommunication Sector (ITU-R) as the standard for the new generation of UHD (ultra-high-definition) video production and display systems, it redefines the parameters and indicators of ultra-high-definition video display for TV broadcasting and consumer electronics and promotes the further standardization of 4K UHD home display equipment; it can display highly saturated dark greens, oranges, and so on, and is better suited to outdoor scenes. BT.709: the ITU-R standard for high-definition (1080P) video production and display systems; BT.709 better matches the color requirements of doctors for scenes inside human body cavities.

3. Color temperature and white balance

Color temperature is a physical quantity used in lighting optics to define the color of a light source: when a black body is heated to the temperature at which the color of its emitted light matches that of the light source, that temperature is called the color temperature of the light source, expressed in kelvin (K).

The color rendering index (CRI) of a light source describes its effect on the color appearance of objects compared with a standard reference light source. The closer the CRI is to 100, the better the color rendering of the light source; the lower the CRI, the duller the object's colors appear.

White balance is closely related to color temperature. Images under light sources of different color temperatures show different degrees of color cast: low color temperature gives warm colors, high color temperature gives cool colors. The human eye has color constancy and can adapt to different color temperature environments, so unless the color temperature of the light source is very extreme, the eye automatically perceives the same white object as white under different lighting. White balance simulates this ability of the human eye to adjust color automatically and restore the actual color of the object; in other words, "under any light source, white objects are rendered as white."

4. Other color factors

Gamma. Broadly, the gamma value is the power-law relationship between input and output values, used to compensate for the eye's nonlinear perception of brightness. Gamma adjustment converts the linear information captured by the image sensor into nonlinear information closer to what the human eye perceives, and adjusting gamma changes the visibility of detail in the output picture to a certain extent.

Contrast. Image contrast is a measure of the difference in brightness between the brightest whites and the darkest blacks in an image, that is, the magnitude of its grayscale contrast. Generally, the higher the contrast, the clearer and more striking the image and the more vivid the colors; the lower the contrast, the grayer the whole picture looks.

Gain. Gain adjustment sets the upper limit of image-signal amplification. The higher the gain, the more sensitive the sensor and the brighter the image, but noise is amplified as well. It is generally recommended to raise the gain in poorly lit scenes and to lower it in scenes with strong point light sources to suppress their overexposure.
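Before moving on to color reproduction, here is a minimal sketch of the gamma and white-balance ideas above. The gray-world method, the default gamma value, and the function names are generic assumptions, not the endoscopic system's actual processing.

```python
import numpy as np

def apply_gamma(img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Power-law mapping between input and output values.

    Values are normalized to [0, 1], raised to 1/gamma, and rescaled, which
    brightens mid-tones and mimics the eye's non-linear brightness response.
    """
    norm = img.astype(np.float32) / 255.0
    return (np.power(norm, 1.0 / gamma) * 255.0).clip(0, 255).astype(np.uint8)

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """Simple gray-world white balance.

    Each channel is scaled so the channel means become equal, so a neutral
    (white) object is rendered as white regardless of the light source's
    color temperature.
    """
    means = img.reshape(-1, 3).mean(axis=0)        # per-channel averages
    scale = means.mean() / (means + 1e-6)          # gains that equalize the averages
    balanced = img.astype(np.float32) * scale
    return balanced.clip(0, 255).astype(np.uint8)
```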
5. Color reproduction of the endoscopy system

Color reproduction means adjusting the white balance, contrast, and chroma so that the colors of the image are broadly consistent with those of the original scene and the result looks most natural to the human eye. For example, chroma adjustment changes the vividness of the colors to make the image fuller and closer to reality, and contrast adjustment gives the image more depth and enhances its visual effect.

Thanks for reading. If you want to know more information, please contact us.


Imaging principle (Part 1)

There are many image adjustment parameters in an endoscope system. Do you know what these parameters mean and how adjusting them will affect the image? Let's learn more.

1. Main parameters of the image: pixels and resolution

Highest pixels: the highest pixel count is the true pixel count of the image sensor, and it usually includes the non-imaging part of the sensor. The unit is pixels.

Effective pixels: the number of pixels that actually form the image, also in pixels.

Resolution: resolution is the image-quality index used during acquisition, transmission, and display, as well as the inherent screen structure of the display device itself, to express the fineness of the image. It can be quantified as "horizontal pixels x vertical pixels".

4K and 1080P. The "P" stands for progressive scan, and the number before it is the number of rows of pixels: 1080P means 1080 rows and is usually 1920x1080 pixels. The "K" refers to roughly a thousand columns of pixels: 4K means about 4000 columns and is usually 4096x2160 or 3840x2160 pixels.

2. Clarity

Clarity refers to the ability of the imaging device to render the picture sharply and show fine detail. It reflects the sharpness of the image as perceived by the human eye and is the viewer's subjective impression of the final image, produced by the combined objective performance of the system and equipment. Clarity is generally measured with indicators such as SFR (Spatial Frequency Response), MTF (Modulation Transfer Function), and TV lines: it can be measured by the finest black-and-white line pairs that remain distinguishable, using standard test methods and test charts, and the measurement has a clear unit, the TV line.

3. Sharpness

Sharpness is a parameter that reflects the crispness of the image plane and of image edges. Sharpness adjustment changes the crispness of the edges in the image: when sharpness is increased, detail in the image plane stands out more and the picture looks clearer. However, excessive sharpness distorts the image, gives object edges a jagged appearance, and amplifies noise. It is therefore recommended to raise sharpness moderately in scenes with sufficient brightness to enhance detail, and to lower it in scenes with insufficient brightness to reduce noise (a minimal sketch appears at the end of this post).

Thanks for reading. If you want to know more information, please contact us.
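As a closing illustration of the sharpness adjustment described above, here is a generic unsharp-masking sketch. The method, radius, and amount are illustrative assumptions, not the system's actual algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adjust_sharpness(img: np.ndarray, amount: float = 1.0, radius: float = 1.5) -> np.ndarray:
    """Unsharp masking on an (H, W, 3) image: add a scaled copy of the
    high-frequency detail (original minus blurred) back onto the image.

    amount > 0 strengthens edges and fine detail; large values exaggerate
    edges (halos, jagged contours) and amplify noise, which is why lowering
    the amount is preferable in dim, noisy scenes.
    """
    blurred = gaussian_filter(img.astype(np.float32), sigma=(radius, radius, 0))
    detail = img.astype(np.float32) - blurred
    return (img + amount * detail).clip(0, 255).astype(np.uint8)
```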

