Internet Photogrammetry for Inspection of Seaports


POLISH MARITIME RESEARCH, Special Issue 2017 S1 (93), Vol. 24, pp. 174-181
DOI: 10.1515/pomr-2017-0036

Zygmunt Paszotta1, Malgorzata Szumilo1, Jakub Szulwic2
1 University of Warmia and Mazury in Olsztyn, Poland
2 Gdansk University of Technology, Poland

ABSTRACT

This paper points out the possibility of using Internet photogrammetry to construct 3D models from images obtained by means of UAVs (unmanned aerial vehicles). The solutions may be useful for the inspection of ports: checking the content of cargo, transport safety, or assessing the technical infrastructure of ports and quays. The approach can complement measurements made by laser scanning and traditional surveying methods. The authors recommend a solution for creating 3D models from non-metric images acquired by UAVs with digital cameras. The developed algorithms and the presented software generate 3D models through the Internet in two modes: anaglyph display and display in shutter systems. The problem of 3D image generation in photogrammetry is solved by using epipolar images. The appropriate method was presented by Kreiling in 1976; however, it applies to photogrammetric images for which the internal orientation is known. For digital images obtained with non-metric cameras, another solution is required, based on the fundamental matrix concept introduced by Luong in 1992. Determining the matrix which defines the relationship between the left and right digital image requires at least eight homologous points, and the solution is obtained by singular value decomposition (SVD). The fundamental matrix yields the epipolar lines, which makes the correct orientation of the images forming stereo pairs possible. The appropriate mathematical bases and illustrations are included in the publication.
Keywords: inspection of wharf, security of seaports, registration of cargo ships, UAV (unmanned aerial vehicles)

INTRODUCTION

During recent years the use of unmanned aerial vehicles (UAVs) has increased significantly. UAVs capable of photogrammetric data acquisition open various new applications in aerial and terrestrial photogrammetry and also introduce low-cost alternatives to classical photogrammetry [25, 26]. An extensive overview of the evolution and state of the art of photogrammetry using UAVs is given by Eisenbeiss and Unger [4, 31]. They describe early developments and review UAV photogrammetric topics such as flight planning, image acquisition and orientation, and data processing. In this paper the authors focus on generating 3D images through the Internet from images taken by UAVs.

The solution is based on web photogrammetry and can be used for the inspection of ports, wharves, coasts, and ships. Construction of a stereoscopic model, optionally enriched with scale imaging, allows for correct identification and spatial observations. A tool like photogrammetry may be useful for evaluating and measuring tanks, silos and special constructions in ports [1, 7, 12-15, 17, 28, 30]. The main limitations of the method are the range of the UAV and the availability of a satellite positioning system [11]. Groups of drones taking images synchronously are also used; they can analyse filling, track moving autonomous objects, and inspect moving and vibrating parts of vessels. Emergency services might be another area of potential application, where rescue teams might need quick and portable access to UAV data. Examining a stereoscopic model for the purpose of identifying objects and judging their significance is much easier than interpreting a plane image [21, 22].

In traditional photogrammetry (which is based on photogrammetric images) generating stereoscopic models is a public process. The method of analogue spatial visualization in stereoscopic entertainment media such as the Kaiserpanorama (consisting of sets of stereo slides and a multi-station viewing apparatus) has been known since the mid 19th century. In the literature source [20] four conditions are mentioned for the analogue way of obtaining the 3D effect. They refer to:
– the parallactic angle,
– the difference of scales of the left and right image,
– the separation of transmission channels for the left and right images (sending the "right eye" image to the right eye and the "left eye" image to the left eye),
– watching the pictures on the epipolar plane.

When all the above conditions are met, it is possible to obtain 3D images from non-metric images. The first condition is satisfied if the images acquired during the photogrammetric flight have appropriate forward overlap. Images taken when the UAV has only turned around are improper: in this case the base, which represents the distance between the projection centres of the two images, is too small, and the images do not overlap. The second condition (scale stability) should be guaranteed by a level flight. The third condition depends on the selected 3D display technique; in this paper the anaglyph method and active shutter 3D systems are implemented. The theoretical basis of the fourth condition is described in detail below.

A breakthrough in the process of 3D image generation was made in 1976, when Kreiling developed a method of generating epipolar images [3, 9, 27, 33]. However, projecting the images of a stereogram onto a common plane in this way is possible only if the camera constants and the elements of relative orientation are known. Generating and superimposing the component digital images is also a public process: many functions and actions which perform measurements and computations must be programmed. It appears possible to generate 3D images (pairs of images whose relative orientation is known and which can be visualized as a stereoscopic model) interactively on the Internet using the idea of anaglyphic images. Such a solution can be reached on an ordinary computer, but the quality of the three-dimensional image obtained in this way is not sufficient; the generation and visualization of 3D images using shutter glasses is a more advanced approach. Both of the mentioned solutions work as an Internet application in Java and use client-server technology, which in practical terms means communication between applets and the servlet. This paper presents the theoretical fundamentals of the adopted solution, together with a plan for its implementation. From about 2001 up to now the authors have developed interactive Internet photogrammetric applications [22, 23]; the described research is a summary of work performed on non-metric images. Due to the use of Java software, the presented solution can be used in most contemporary web browsers.

THEORETICAL FUNDAMENTALS

The concept of the fundamental matrix is introduced, taking as the starting point the assumption that homologous points lie on the epipolar plane.

According to Fig. 1, the following equation must be satisfied:

r_t" · (b × r') = 0    (1)

where:

b \times r' = \begin{vmatrix} i & j & k \\ b_x & b_y & b_z \\ r_x' & r_y' & r_z' \end{vmatrix} = \begin{bmatrix} b_y r_z' - b_z r_y' \\ b_z r_x' - b_x r_z' \\ b_x r_y' - b_y r_x' \end{bmatrix} = \begin{bmatrix} 0 & -b_z & b_y \\ b_z & 0 & -b_x \\ -b_y & b_x & 0 \end{bmatrix} \begin{bmatrix} r_x' \\ r_y' \\ r_z' \end{bmatrix} = B r'    (2)

If Q is the rotation matrix transforming the vector r" to the object coordinate system (O1, x, y, z), i.e.

r_t" = Q r",    (3)

then the scalar product is given as:

(r_t")^T B r' = (r")^T Q^T B r' = 0    (4)

Let the two cameras be characterized by two calibration matrices A_1 and A_2 (containing only the principal distance):

r_p' = A_1 r',   r_p" = A_2 r"    (5)

Fig. 1. Geometrical relationships on a stereogram
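The coplanarity condition (1) and the skew-symmetric form (2) can be checked numerically. A minimal pure-Python sketch (the base and ray values are hypothetical, chosen only for illustration):

```python
def skew(b):
    """Skew-symmetric matrix B with b x r' = B r' (Eq. 2)."""
    bx, by, bz = b
    return [[0.0, -bz,  by],
            [ bz, 0.0, -bx],
            [-by,  bx, 0.0]]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

b  = [1.0, 0.1, 0.05]                # photo base O1 -> O2 (hypothetical)
r1 = [0.3, 0.2, 1.0]                 # ray r' from O1 to an object point P
# Ray r_t" from O2 to the same point P = 2*r1, expressed in the O1 system:
rt2 = [2.0 * r1[i] - b[i] for i in range(3)]

# Eq. (2): the cross product equals the matrix product B r'
assert all(abs(c - m) < 1e-12 for c, m in zip(cross(b, r1), matvec(skew(b), r1)))

# Eq. (1): the base and the two homologous rays are coplanar,
# so their triple product vanishes
assert abs(dot(rt2, cross(b, r1))) < 1e-12
```

Because the second ray is built as a linear combination of the base and the first ray, the three vectors lie in one epipolar plane, which is exactly what Eq. (1) expresses.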
therefore:

(r_p")^T A_2^{-T} Q^T B A_1^{-1} r_p' = 0    (6)

Let F = A_2^{-T} Q^T B A_1^{-1} be called the fundamental matrix. It helps to determine the epipolar lines. For a point with coordinates r_p' on the left image, the epipolar line on the right image is described by the following equation (in homogeneous coordinates):

r_p" · u" = 0,   where u" = F r_p'    (7)

The matrix F has the shape 3×3, so there are nine coefficients to determine. One of them is connected with the scale factor and hence can be eliminated, so that only eight coefficients have to be calculated; to determine them, at least eight homologous points are needed. This algorithm was introduced by Longuet-Higgins [16, 24]. Eq. (6) can therefore be written as follows:

(r_p")^T F r_p' = 0    (8)

All the epipolar lines on a given image plane, either left or right, go through one epipole. They make up a pencil of lines which meet in the epipole (the epipolar pencil). If the epipoles are marked e_p' and e_p", respectively, then the following relationships are satisfied [6]:

F e_p' = 0,   (e_p")^T F = 0    (9)

Each pair of homologous points yields one linear equation in the coefficients of F, so at least eight points give a homogeneous system whose design matrix M is factorized by the singular value decomposition:

M = S V D^T    (10)

The rank of M is equal to the number of non-zero elements on the diagonal of the diagonal matrix V. In order to obtain rank(M) = 8, or rather rank(M') = 8, the least singular value should be set to zero. If the measured coordinates of the homologous points are correct, there should be a sharp jump between the eighth and the ninth singular value. The corrected matrices V and M are marked V' and M', respectively, where:

M' = S' V' D'^T    (11)

The authors propose a solution which is available through web browsers as a portable tool and a convenient alternative to standalone desktop applications. The Internet solution allows taking advantage of all the merits of this technology, such as universal access to the data and simultaneous use by multiple users.

IMPLEMENTATION OF THE APPLICATION
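The estimation of F described above, with the rank-2 correction of Eqs. (10)-(11), can be sketched in a few lines. This is not the authors' Java implementation: it is an illustrative Python/NumPy version run on a synthetic, noise-free stereo pair (all geometry hypothetical), and it adds Hartley's coordinate normalisation, which the paper does not discuss, purely for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic, noise-free stereo pair (hypothetical numbers): level flight
# (Q = I), calibration matrices A1, A2 holding only the principal distance.
A1 = np.diag([800.0, 800.0, 1.0])
A2 = np.diag([850.0, 850.0, 1.0])
b = np.array([1.0, 0.05, 0.02])                  # photo base O1 -> O2

X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(12, 3))
xl_h = (A1 @ X.T).T                              # homogeneous left image points
xr_h = (A2 @ (X - b).T).T                        # homogeneous right image points
xl = xl_h[:, :2] / xl_h[:, 2:]
xr = xr_h[:, :2] / xr_h[:, 2:]

def normalize(pts):
    """Hartley normalisation: centroid to origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    return (pts - c) * s, T

def eight_point(xL, xR):
    """Estimate F from at least eight homologous points (Eq. 8)."""
    xLn, Tl = normalize(np.asarray(xL))
    xRn, Tr = normalize(np.asarray(xR))
    # One row of the design matrix M per homologous pair.
    M = np.array([[ur*ul, ur*vl, ur, vr*ul, vr*vl, vr, ul, vl, 1.0]
                  for (ul, vl), (ur, vr) in zip(xLn, xRn)])
    _, _, Vt = np.linalg.svd(M)                  # M = S V D^T  (Eq. 10)
    F = Vt[-1].reshape(3, 3)                     # vector of the least singular value
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                                   # rank-2 correction  (Eq. 11)
    F = U @ np.diag(s) @ Vt
    return Tr.T @ F @ Tl                         # undo the normalisation

F = eight_point(xl[:8], xr[:8])

# Held-out homologous points satisfy the epipolar constraint (Eq. 8),
# i.e. the right image point lies on the epipolar line u" = F r_p' (Eq. 7).
for i in range(8, 12):
    u_right = F @ np.append(xl[i], 1.0)
    assert abs(np.append(xr[i], 1.0) @ u_right) < 1e-6

# The left epipole is the null vector of F:  F e_p' = 0  (Eq. 9)
e_left = np.linalg.svd(F)[2][-1]
assert np.linalg.norm(F @ e_left) < 1e-8

# Eq. (6): with Q = I the true fundamental matrix is A2^{-T} B A1^{-1}
B = np.array([[0.0, -b[2], b[1]], [b[2], 0.0, -b[0]], [-b[1], b[0], 0.0]])
F_true = np.linalg.inv(A2).T @ B @ np.linalg.inv(A1)
F_true /= np.linalg.norm(F_true)
Fn = F / np.linalg.norm(F)
assert min(np.linalg.norm(Fn - F_true), np.linalg.norm(Fn + F_true)) < 1e-8
```

On exact data the estimated F agrees with the matrix of Eq. (6) up to scale, and every homologous pair satisfies the epipolar constraint; with noisy measurements the same code gives a least-squares estimate, and the sharp drop after the eighth singular value of M disappears gradually.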
Fig. 2. Activity diagram in the process of generating anaglyphic 3D images

Implementation of the method in question requires proper software and the execution of various operations, both on the client side and on the server side.
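For the anaglyph display mode, once both images have been resampled to epipolar geometry, composing the anaglyph reduces to channel mixing: the red channel is taken from the left image and the green and blue channels from the right image, so that red-cyan glasses send each component image to the proper eye (the third of the four conditions above). A minimal sketch with hypothetical pixel values (the paper's actual implementation is a Java applet):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Compose a red-cyan anaglyph from an epipolar-resampled stereo pair:
    red channel from the left image, green and blue from the right one."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Hypothetical 2x2 test images.
left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 200                   # left image red channel
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 1] = 90                   # right image green channel
right[..., 2] = 120                  # right image blue channel

ana = make_anaglyph(left, right)
assert ana[0, 0].tolist() == [200, 90, 120]
```

The shutter-glasses mode instead alternates the full-colour left and right epipolar images in sync with the glasses, so no channel mixing is needed there.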