Physics For Engineers - 1

Retardation Plate

Retardation Plate: A retardation plate (or phase shifter) is an optical component that can rotate or modify the plane of polarization of the beam incident on it. In its simplest form, the retardation plate consists of a uniaxial crystal cut so that the optic (symmetry) axis lies in the plane of the plate. If a retardation plate introduces a phase change of (2p + 1)π (p being an integer) between the two mutually orthogonal components of the electric field associated with the light wave, it is called a half-wave plate, since in this case the relative path difference introduced between the two components is (2p + 1)λ/2, i.e. an odd multiple of half the wavelength. If the corresponding phase difference introduced is (2p + 1)π/2, it is called a quarter-wave plate, since in this situation the relative path difference introduced is (2p + 1)λ/4. Within the retardation plate, the optic axis and the axis normal to it are called the slow axis and the fast axis. The slow axis is the axis for which the refractive index is larger, and the fast axis is the axis for which the refractive index is smaller; hence the speed of propagation is smaller along the former and larger along the latter. Let us consider a half-wave plate placed in the y-z plane with its optic axis making an angle θ with the y axis. A linearly polarized plane wave falls at normal incidence on this wave plate, as shown in Fig. 1. The direction of propagation of this wave is along the x axis. The wave is polarized along the y direction and its electric field can be represented as E = ĵ E₀ cos(kx − ωt)   (1)
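The action of a half-wave plate on a linearly polarized wave can be sketched numerically with Jones calculus (a formalism not introduced in the text above; the matrix below is the standard one for an ideal half-wave plate, with the overall phase factor dropped):

```python
import numpy as np

def half_wave_plate(theta):
    # Jones matrix of an ideal half-wave plate whose fast axis makes an
    # angle theta (radians) with the first basis axis; the overall phase
    # factor is omitted.
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

# A wave polarized along y, i.e. E = (E0, 0) in the (y, z) basis:
E_in = np.array([1.0, 0.0])

# Plate axis at 45 degrees to y: the plane of polarization is rotated
# by 2 * 45 = 90 degrees, so the output is polarized along z.
E_out = half_wave_plate(np.pi / 4) @ E_in
```

This illustrates the general rule that a half-wave plate at angle θ rotates the plane of polarization by 2θ.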

Introduction Of Theory Of Relativity

Introduction of Theory of Relativity: We are familiar with the word 'motion'. In everyday life we see the motion of several objects around us. If asked to define motion, we would say "change of position with time". In attempting to define motion we have used two concepts: space (position) and time. By intuition we know what space and time are, and according to Newton's view they are defined as follows. Space is absolute, in the sense that it exists permanently and independently of whether there is any matter in space or moving through it. Space is thus a sort of three-dimensional matrix into which one can place objects, or through which objects can move, without producing any interaction between space and object. Each object in the universe exists at a particular point in space at a particular time, and an object in motion undergoes a continuous change of position with time. Time, in Newton's view, is also absolute and flows on without regard to any physical object or event. One can neither speed up time nor slow down its rate. The flow of time is uniform throughout the universe. If we imagine the instant "now", it occurs simultaneously on every planet and star in the universe. The time interval between two events is the same everywhere in the universe; this can be verified by observing physical, chemical and biological events. We have thus defined space and time using our everyday knowledge of nature and our surroundings. However, these intuitions are contradicted when an object moves at very high speed, approaching or equaling the speed of light. It was Einstein who exposed some of the most important limitations of classical ideas, including those of Newton. Einstein's contribution led to the development of the special theory of relativity (STR).

Inertial & Non-inertial Frames

Frames of Reference: When someone says 'the bus is moving', we can be certain that what is being described is a change of position of the bus with respect to the earth's surface or to a building, tree, etc. fixed to the earth. We accept the local surroundings (a collection of objects attached to the earth and therefore at rest relative to each other) as our frame of reference. The choice of a particular frame of reference is clearly just a matter of convenience, and it is often helpful to use the frame in which the description of motion is simplest. Within a frame of reference we set up a coordinate system, which is used to specify the position of an object. To specify the position of any object we use three numbers; the choice of numbers is determined by the type of coordinate system we use. The commonly used coordinate systems are rectangular coordinates (x, y, z), spherical polar coordinates (r, θ, φ) and cylindrical coordinates (ρ, φ, z). Inertial Frame and Galilean Transformation: The reference frames described earlier, such as the earth, buildings, trees, etc., are called inertial frames. In such a frame an object remains at rest if it is not influenced by any force, i.e. a force arising from its interaction with other objects. Imagine a football kept on a playground: the ball remains at rest unless you kick it or pick it up (in the absence of wind). Here the playground can be taken as an inertial frame. If one inertial frame is identified, then any other frame moving with constant velocity with respect to it is also inertial. This argument implies that all unaccelerated frames constitute the class of inertial frames. All inertial frames are equivalent in the sense that any dynamical experiment gives the same result in all of them: both the force on an object and its acceleration remain the same when we go from one frame to another.
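The Galilean transformation between two inertial frames can be written down directly; the short sketch below (variable names are my own) checks numerically that an acceleration, and hence a force, is the same in both frames:

```python
def galilean(x, t, v):
    # Frame S' moves with constant velocity v along x relative to S:
    # x' = x - v*t, and time is absolute, t' = t.
    return x - v * t, t

# A uniformly accelerated particle, x(t) = 0.5 * a * t**2 with a = 2.0:
a, v, dt = 2.0, 5.0, 1e-3
ts = [0.0, dt, 2 * dt]
xs = [0.5 * a * t**2 for t in ts]
xps = [galilean(x, t, v)[0] for x, t in zip(xs, ts)]

# The second finite difference gives the acceleration in each frame:
acc_S  = (xs[2]  - 2 * xs[1]  + xs[0])  / dt**2
acc_Sp = (xps[2] - 2 * xps[1] + xps[0]) / dt**2
# Both come out equal to a: acceleration is Galilean-invariant.
```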

System For Observing Interference Phenomenon: Fresnel Biprism

System for observing interference phenomena: Various systems based on these principles have been designed and are used to observe interference. They find several applications in science and engineering, such as measurement of the wavelength of a light source, the wavelength difference between two closely spaced waves, the optical flatness of surfaces, the thickness of a film, the refractive index of a material, etc. We shall discuss some of these systems and their applications in the following sections. The major systems for observing interference are as follows. Fresnel Biprism: A Fresnel biprism is a thin double prism, placed base to base, with a very small refracting angle. This is equivalent to a single prism with one angle nearly 179° and the other two of about 30′ each. Here interference is observed by division of wavefront. Monochromatic light from a narrow slit S falls on the biprism ABC, which divides it into two components. One component is refracted by portion AC of the biprism and appears to come from S1, while the other, refracted through portion BC, appears to come from S2. Thus S1 and S2 act as two virtual coherent sources formed from the original source. Light waves from S1 and S2 interfere in the shaded region and interference fringes are formed, which can be observed by placing a screen MN. If d is the separation between the virtual sources S1 and S2, Z1 is the separation between the source S and the biprism, and Z2 is the separation between the biprism and the screen, then the fringe width is β = λ(Z1 + Z2)/d.
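Putting the standard biprism relations into code (the virtual-source separation d = 2Z1(n − 1)α and the fringe width β = λ(Z1 + Z2)/d; the numerical values below are illustrative, not from the text):

```python
import math

def biprism_fringe_width(wavelength, z1, z2, n, alpha):
    # Separation of the two virtual sources produced by a biprism of
    # refractive index n and refracting angle alpha (radians):
    d = 2 * z1 * (n - 1) * alpha
    # Fringe width on a screen at distance z1 + z2 from the sources:
    return wavelength * (z1 + z2) / d

# Illustrative values: sodium light, slit 10 cm from the biprism,
# screen 90 cm beyond it, glass biprism with a 0.5 degree angle.
beta = biprism_fringe_width(589e-9, 0.10, 0.90, 1.5, math.radians(0.5))
# beta comes out to a fraction of a millimetre, as observed in practice.
```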

Michelson Interferometer

Michelson interferometer: In the Michelson interferometer the two coherent sources are derived by the principle of division of amplitude. Parallel light rays from a monochromatic source are incident on a beam splitter (glass plate) G1, which is semi-silvered on its back surface and mounted at 45° to the axis. A light ray incident at O is refracted into the glass plate and reaches point A, where it is partially reflected (ray 1) and partially transmitted (ray 2). These rays then fall normally on mirrors M1 (movable) and M2 (fixed) and are reflected back. The reflected rays reunite at point A and follow the path AT. Since these two rays are derived from the same source (at A), they are coherent and can therefore interfere and form an interference pattern. In this geometry ray 1 travels an extra optical path within the glass, so a compensating plate G2, of the same thickness as plate G1, is inserted in the path of ray 2 parallel to G1. This introduces for ray 2 the same optical path in glass as ray 1 travels in plate G1 (hence the name compensating plate). Any optical path difference between ray 1 and ray 2 is now equal to the actual path difference between them. To understand how the fringes are formed, refer to the figure. An observer at T will see, through the beam splitter, the images of mirror M2 and source S (M'2 and S' respectively) along with the mirror M1. S1 and S2 are the images of the source in mirrors M1 and M2 respectively. The positions of these elements in the figure depend upon their relative distances from point A.
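A common use of the instrument follows directly from this geometry: moving mirror M1 by a distance d changes the path difference by 2d, so N fringes cross the field of view when 2d = Nλ. A minimal sketch (the numbers are illustrative):

```python
def wavelength_from_fringe_count(mirror_shift, n_fringes):
    # Each fringe crossing corresponds to a path-difference change of one
    # wavelength, and the path difference changes by twice the mirror shift.
    return 2 * mirror_shift / n_fringes

# If 100 fringes cross while M1 moves 29.5 micrometres:
lam = wavelength_from_fringe_count(29.5e-6, 100)   # 5.9e-7 m = 590 nm
```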

Diffraction By Multiple Slits: Diffraction Grating

Diffraction by multiple slits: Diffraction Grating: In an earlier lecture we saw the effect on the intensity distribution when light waves pass through two nearby narrow slits: the broad principal maximum produced by a single slit then consists of alternate dark and bright regions. Now we shall see the combined effect of interference and diffraction on the intensity distribution when light waves pass through a large number of narrow slits very close to each other. This arrangement of slits (Fig. 12.3.1) is known as a diffraction grating and finds many applications in optics. Once again each of these slits acts as a source of secondary wavelets, and we again see the combined effect of interference and diffraction on the screen. In this case the broad principal maximum of the single-slit pattern consists of several principal maxima and very-low-intensity secondary maxima separated by minima. We assume that the width of each slit is d and the separation between the centres of any two adjacent slits is b. We will calculate the resultant intensity at a point P on the screen due to N such slits. To find the intensity distribution we have to calculate the resultant amplitude of all waves reaching P. These wavelets now originate from N slits, and the direct calculation of the amplitude, as we did for the single slit and the double slit, is complicated; however, we can use the complex-amplitude method. Here we note that the phase difference between the waves originating from similar positions of two consecutive slits is φ = (2π/λ) b sin θ (b being the separation between the two slits), so the amplitude contribution of each successive slit carries an additional phase factor e^(iφ) relative to the previous one.
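The result of the complex-amplitude calculation is the familiar grating intensity pattern: the single-slit envelope multiplied by an N-slit interference factor. A sketch in the text's notation (slit width d, slit separation b), normalised so that a single slit at θ = 0 gives 1:

```python
import numpy as np

def grating_intensity(theta, wavelength, slit_width, spacing, n_slits):
    # Single-slit diffraction envelope: (sin(beta)/beta)**2
    beta = np.pi * slit_width * np.sin(theta) / wavelength
    envelope = np.sinc(beta / np.pi) ** 2      # np.sinc(x) = sin(pi*x)/(pi*x)
    # N-slit interference factor: (sin(N*gamma)/sin(gamma))**2, where
    # 2*gamma is the phase difference (2*pi/lambda)*spacing*sin(theta)
    # between adjacent slits; at the principal maxima the factor is N**2.
    gamma = np.pi * spacing * np.sin(theta) / wavelength
    s = np.sin(gamma)
    if np.isclose(s, 0.0):
        return float(envelope) * n_slits ** 2
    return float(envelope) * (np.sin(n_slits * gamma) / s) ** 2
```

At θ = 0 every slit contributes in phase, so the intensity is N² times that of one slit.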

Resolving Power Of Image Forming Systems: Telescope And Microscope

Resolving power of image-forming systems: Telescope and Microscope: So far we have discussed diffraction patterns due to apertures extending in one dimension only, for which the resultant amplitude at a point on the screen required integration along one direction only. However, in any image-forming system, be it our eye, a camera lens, a telescope or a microscope, light enters through a circular aperture followed by a lens which forms the image on the screen. The image of a distant point source is therefore not a point, but a sort of diffraction pattern. In this case, calculating the resultant intensity distribution requires a double integral for the resultant amplitude, which is quite complex. However, we can intuitively see that the diffraction pattern will be in the form of a central circular disk surrounded by dark and bright rings. The condition for the minima is D sin θ = mλ, where D is the diameter of the aperture and θ is the angular separation of the m-th order minimum from the centre. However, in this case m is not an integer as it was for single-slit diffraction. Airy showed that the condition for the first minimum (first dark ring) is sin θ = 1.22 λ/D. Telescope: When we use a telescope to image two stars, we can determine their angular separation in space. Whether this resolution is possible is determined by the resolving power of the telescope. We have seen earlier that two sources are well resolved if their angular separation is such that the central maximum of one falls on the first minimum of the other (in other words, Δθ ≥ 1.22 λ/D).
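The Rayleigh criterion above translates directly into a limiting angle for a telescope of aperture D (the numbers below are illustrative):

```python
def rayleigh_limit(wavelength, aperture_diameter):
    # Smallest resolvable angular separation (radians) for a circular
    # aperture, from the Airy condition sin(theta) = 1.22 * lambda / D;
    # sin(theta) ~ theta for the small angles involved.
    return 1.22 * wavelength / aperture_diameter

# Illustrative: a 10 cm telescope objective in green light (550 nm)
theta_min = rayleigh_limit(550e-9, 0.10)   # about 6.7e-6 rad
```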

Optics: Diffraction

Optics: Diffraction: If an opaque obstacle (or aperture) is placed between a source of light and a screen, a sufficiently distinct shadow of the obstacle (or an illuminated aperture) is obtained on the screen, which shows that light travels approximately in straight lines. If, however, the size of the obstacle (or aperture) is small (comparable with the wavelength of light), there is a departure from straight-line propagation: the light bends round the corners of the obstacle (or aperture) and enters the geometrical shadow. This bending of light at sharp corners or edges is called diffraction. As a result of diffraction, the edges of the shadow (or illuminated region) are not sharp; rather, the intensity is distributed in a certain way depending upon the nature of the obstacle (or aperture). Let us first explain how light bends around a sharp corner. According to Huygens' principle, when a wave propagates, each point on its wavefront serves as a source of spherical secondary wavelets having the same frequency as the original wave [Fig. 1a]. The resultant at any later point is the envelope of these secondary wavelets. However, this picture alone does not explain the diffraction of light through small apertures. If we assume, as shown in Fig. 1b, that each unobstructed point of a wavefront, at a given instant, serves as a source of spherical secondary wavelets, then the amplitude of the optical field at any point beyond is the superposition of all these wavelets. The maximum path difference between these secondary wavelets at any point P is equal to AB (the path difference equals AB when the point P merges with either point A or point B).

Introduction: Laser

INTRODUCTION: LASER: No other scientific discovery of the 20th century has found so many exciting applications as the laser (an acronym for Light Amplification by Stimulated Emission of Radiation). The basic concepts of the laser were first given by an American scientist, Charles Hard Townes, and two Soviet scientists, Alexander Mikhailovich Prokhorov and Nikolai Gennadiyevich Basov, who shared the Nobel Prize in 1964. However, T. H. Maiman of the Hughes Research Laboratory, California, was the first scientist to demonstrate the laser experimentally, by flashing light through a ruby crystal, in 1960. The laser is a powerful source of light having extraordinary properties not found in normal light sources such as tungsten lamps, mercury lamps, etc. The unique property of laser light is that its waves travel very long distances with very little divergence. In a conventional source, the light is emitted as a jumble of separate waves that cancel each other at random (Fig. 1.1a) and hence can travel only very short distances. An analogy can be made with a large number of pebbles thrown into a pool at the same time. Each pebble generates a wave of its own. Since the pebbles are thrown at random, the waves they generate cancel each other and as a result travel only a very short distance. On the other hand, if the pebbles are thrown into the pool one by one, at the same place and at constant intervals of time, the waves thus generated strengthen each other and travel long distances. In this case the waves are said to travel coherently. In a laser, the light waves are exactly in step with each other and thus have a fixed phase relationship. It is this coherence that makes laser light so narrow, so powerful and so easy to focus on a given object. Light with such qualities is not found in nature.

Numerical Aperture

Numerical aperture: The numerical aperture (NA) is the sine of the vertex half-angle of the largest cone of rays that can enter or leave the core of an optical fibre, multiplied by the refractive index of the medium in which the vertex of the cone is located. All values are measured at 850 nm. The value of the numerical aperture is about 5% lower than the maximum theoretical numerical aperture NAtmax, which is derived from a refractive-index measurement trace of the core and cladding: NAtmax = √(n1² − n2²), in which n1 is the maximum refractive index of the core and n2 is the refractive index of the innermost homogeneous cladding. Macrobending loss: For single-mode fibres, macrobending loss varies with wavelength, bend radius and number of turns about a mandrel of a specified radius. Therefore, the limit for macrobending loss is specified in ITU-T Recommendations for defined wavelength(s), bend radius and number of turns. The recommended number of turns corresponds to the approximate number of turns deployed in all splice cases of a typical repeater span. The recommended radius is equivalent to the minimum bend radius widely accepted for long-term deployment of fibres in practical system installations to avoid static-fatigue failure. For multimode fibres the launch condition is of paramount importance for macrobending loss; in particular, the higher-order modes, which are the most sensitive, are the first to be stripped off by bending. The mode distribution encountered at a specific macrobend may depend on how many macrobends precede it. For example, the first bend might influence the launch condition at the second bend, and the second bend might influence the launch condition at the third bend, and so on. Consequently, the macrobending added loss at a given bend might differ from that at another bend; in particular, the first bend may have the largest influence on the following bends.
Consequently, the macrobending added loss produced by multiple bends should not be expressed in units of "dB/bend" by dividing the total added loss by the number of bends, but in dB for the specified number of bends. Fibre and protective materials: The substances of which the fibres are made should be known, because care may be needed when fusion splicing fibres of different substances. However, adequate splice loss and strength can be achieved when splicing different high-silica fibres. The physical and chemical properties of the material used for the fibre primary coating, and the best way of removing it (if necessary for the splicing of the fibres), should be indicated. The primary coating is the layer(s) of protective coating material applied to the fibre cladding during or after the drawing process to preserve the integrity of the cladding surface and to give a minimum amount of required protection (e.g. a 250 μm protective coating). A secondary coating, made of layer(s) of coating material, can be applied over one or more primary-coated fibres in order to give additional required protection or to arrange fibres together in a particular structure, e.g. a 900 μm "buffer" coating, "tight jacket", or a ribbon coating (see Chapter 2).
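The theoretical numerical aperture defined above is a one-line computation; the index values below are illustrative, not drawn from any Recommendation:

```python
import math

def theoretical_na(n1, n2):
    # Maximum theoretical numerical aperture from the core and cladding
    # indices: NAtmax = sqrt(n1**2 - n2**2).
    return math.sqrt(n1**2 - n2**2)

def acceptance_half_angle(n1, n2, n_medium=1.0):
    # Vertex half-angle of the acceptance cone when the cone's vertex
    # sits in a medium of refractive index n_medium (air by default).
    return math.asin(theoretical_na(n1, n2) / n_medium)

na = theoretical_na(1.48, 1.46)   # about 0.24 for these sample indices
```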

Fibre Attributes

Fibre attributes: Fibre attributes are those characteristics that are retained throughout the cabling and installation processes. The values specified for each type of fibre can be found in the appropriate ITU-T Recommendation for multimode fibre (Recommendation ITU-T G.651.1) or single-mode fibre (Recommendations ITU-T G.652, …, G.657). Core characteristics: A value for the core diameter and for core non-circularity is specified for multimode fibres. The core centre is the centre of the circle which best fits the points at a constant level in the near-field intensity pattern emitted from the central region of the fibre, using wavelengths above and/or below the fibre's cut-off wavelength. Usually the core centre is a good approximation of the mode field centre. The cladding centre is the centre of the circle which best fits the cladding boundary. The core concentricity error is the distance between the core centre and the cladding centre. The tolerances on the physical dimensions of an optical fibre (core, mode field, cladding) are the primary contributors to splice loss and splice yield in the field. The maximum values for these tolerances (concentricity errors, non-circularities, etc.) specified in ITU-T Recommendations help to reduce system costs and support a low maximum splice-loss requirement, typically around 0.1 dB. Fibres with tightly controlled geometry tolerances will not only be easier and faster to splice, but will also reduce the need for testing to ensure high-quality splice performance. This is particularly true when fibres are spliced by passive, mechanical or fusion techniques, for both single fibres and fibre ribbons.

Basic Concepts: Holography

Basic Concepts: holography: TYPES OF HOLOGRAMS: A hologram is a recording in a two- or three-dimensional medium of the interference pattern formed when a point source of light (the reference beam) of fixed wavelength encounters light of the same fixed wavelength arriving from an object (the object beam). When the hologram is illuminated by the reference beam alone, the diffraction pattern recreates the wave fronts of light from the original object. Thus, the viewer sees an image indistinguishable from the original object. There are many types of holograms, and there are varying ways of classifying them. For our purpose, we can divide them into two types: reflection holograms and transmission holograms. A. The reflection hologram The reflection hologram, in which a truly three-dimensional image is seen near its surface, is the most common type shown in galleries. The hologram is illuminated by a “spot” of white incandescent light, held at a specific angle and distance and located on the viewer’s side of the hologram. Thus, the image consists of light reflected by the hologram. Recently, these holograms have been made and displayed in color—their images optically indistinguishable from the original objects. If a mirror is the object, the holographic image of the mirror reflects white light; if a diamond is the object, the holographic image of the diamond is seen to “sparkle.” Although mass-produced holograms such as the eagle on the VISA card are viewed with reflected light, they are actually transmission holograms “mirrorized” with a layer of aluminum on the back. B. Transmission holograms The typical transmission hologram is viewed with laser light, usually of the same type used to make the recording. This light is directed from behind the hologram and the image is transmitted to the observer’s side. The virtual image can be very sharp and deep. For example, through a small hologram, a full-size room with people in it can be seen as if the hologram were a window. 
If this hologram is broken into small pieces (to be less wasteful, the hologram can be covered by a piece of paper with a hole in it), one can still see the entire scene through each piece. Depending on the location of the piece (hole), a different perspective is observed. Furthermore, if an undiverged laser beam is directed backward (relative to the direction of the reference beam) through the hologram, a real image can be projected onto a screen located at the original position of the object. C. Hybrid holograms Between the reflection and transmission types of holograms, many variations can be made. · Embossed holograms: To mass produce cheap holograms for security application such as the eagle on VISA cards, a two-dimensional interference pattern is pressed onto thin plastic foils. The original hologram is usually recorded on a photosensitive material called photoresist. When developed, the hologram consists of grooves on the surface. A layer of nickel is deposited on this hologram and then peeled off, resulting in a metallic “shim.” More secondary shims can be produced from the first one. The shim is placed on a roller. Under high temperature and pressure, the shim presses (embosses) the hologram onto a roll of composite material similar to Mylar. Integral holograms: A transmission or reflection hologram can be made from a series of photographs (usually transparencies) of an object—which can be a live person, an outdoor scene, a computer graphic, or an X-ray picture. Usually, the object is “scanned” by a camera, thus recording many discrete views. Each view is shown on an LCD screen illuminated with laser light and is used as the object beam to record a hologram on a narrow vertical strip of holographic plate (holoplate). The next view is similarly recorded on an adjacent strip, until all the views are recorded. 
When viewing the finished composite hologram, the left and right eyes see images from different narrow holograms; thus, a stereoscopic image is observed. Recently, video cameras have been used for the original recording, which allows images to be manipulated through the use of computer software. Holographic interferometry: Microscopic changes on an object can be quantitatively measured by making two exposures on a changing object. The two images interfere with each other and fringes can be seen on the object that reveal the vector displacement. In real-time holographic interferometry, the virtual image of the object is compared directly with the real object. Even invisible objects, such as heat or shock waves, can be rendered visible. There are countless engineering applications in this field of holometry. Multichannel holograms: With changes in the angle of the viewing light on the same hologram, completely different scenes can be observed. This concept has enormous potential for massive computer memories. Computer-generated holograms: The mathematics of holography is now well understood. Essentially, there are three basic elements in holography: the light source, the hologram, and the image. If any two of the elements are predetermined, the third can be computed. For example, if we know that we have a parallel beam of light of certain wavelength and we have a “double-slit” system (a simple “hologram”), we can calculate the diffraction pattern. Also, knowing the diffraction pattern and the details of the double-slit system, we can calculate the wavelength of the light. Therefore, we can dream up any pattern we want to see. After we decide what wavelength we will use for observation, the hologram can be designed by a computer. This computer-generated holography (CGH) has become a sub-branch that is growing rapidly. 
For example, CGH is used to make holographic optical elements (HOE) for scanning, splitting, focusing, and, in general, controlling laser light in many optical devices such as a common CD player.
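The double-slit "hologram" mentioned above is simple enough to compute directly: given the wavelength and slit separation, the far-field fringe pattern follows from the phase difference between the two slits (a sketch with illustrative values):

```python
import numpy as np

def double_slit_intensity(theta, wavelength, slit_separation):
    # Two ideal narrow slits: the phase difference between their waves at
    # angle theta is delta = (2*pi/lambda) * d * sin(theta), and the
    # relative intensity is cos^2(delta/2), normalised to 1 at the centre.
    delta = 2 * np.pi * slit_separation * np.sin(theta) / wavelength
    return np.cos(delta / 2) ** 2

# Bright fringes sit at sin(theta) = m * lambda / d, dark fringes at
# sin(theta) = (m + 1/2) * lambda / d. Illustrative values:
lam, d = 633e-9, 50e-6
dark = np.arcsin(0.5 * lam / d)   # angle of the first dark fringe
```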

Signal Loss In Optical Fiber And Dispersion

Signal Loss in Multimode and Single-Mode Fiber-Optic Cable: Multimode fiber is large enough in diameter to allow rays of light to reflect internally (bounce off the walls of the fiber). Interfaces with multimode optics typically use LEDs as light sources. However, LEDs are not coherent light sources. They spray varying wavelengths of light into the multimode fiber, which reflects the light at different angles. Light rays travel in jagged lines through a multimode fiber, causing signal dispersion. When light traveling in the fiber core radiates into the fiber cladding (layers of lower refractive index material in close contact with a core material of higher refractive index), higher-order mode loss (HOL) occurs. Together, these factors reduce the transmission distance of multimode fiber compared to that of single-mode fiber. Single-mode fiber is so small in diameter that rays of light reflect internally through one layer only. Interfaces with single-mode optics use lasers as light sources. Lasers generate a single wavelength of light, which travels in a straight line through the single-mode fiber. Compared to multimode fiber, single-mode fiber has a higher bandwidth and can carry signals for longer distances. It is consequently more expensive. For information about the maximum transmission distance and supported wavelength range for the types of single-mode and multimode fiber-optic cables that are connected to line cards on the EX 8200 series switches, see Optical Interface Support in EX 8200 series Switches. Exceeding the maximum transmission distances can result in significant signal loss, which causes unreliable transmission. Attenuation and Dispersion in Fiber-Optic Cable: An optical data link functions correctly provided that modulated light reaching the receiver has enough power to be demodulated correctly. Attenuation is the reduction in strength of the light signal during transmission. 
Passive media components such as cables, cable splices, and connectors cause attenuation. Although attenuation is significantly lower for optical fiber than for other media, it still occurs in both multimode and single-mode transmission. An efficient optical data link must transmit enough light to overcome attenuation. Dispersion is the spreading of the signal over time. The following two types of dispersion can affect signal transmission through an optical data link: ■ Chromatic dispersion, which is the spreading of the signal over time caused by the different speeds of light rays. ■ Modal dispersion, which is the spreading of the signal over time caused by the different propagation modes in the fiber. For multimode transmission, modal dispersion, rather than chromatic dispersion or attenuation, usually limits the maximum bit rate and link length. For single-mode transmission, modal dispersion is not a factor. However, at higher bit rates and over longer distances, chromatic dispersion limits the maximum link length. An efficient optical data link must have enough light to exceed the minimum power that the receiver requires to operate within its specifications. In addition, the total dispersion must be within the limits specified for the type of link in Telcordia Technologies document GR-253-CORE (Section 4.3) and International Telecommunications Union (ITU) document G.957. When chromatic dispersion is at the maximum allowed, its effect can be considered as a power penalty in the power budget. The optical power budget must allow for the sum of component attenuation, power penalties (including those from dispersion), and a safety margin for unexpected losses.
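The power-budget bookkeeping described above can be sketched in a few lines (all dB/dBm figures below are illustrative placeholders, not values from GR-253-CORE or G.957):

```python
def link_margin(tx_power_dbm, rx_sensitivity_dbm, attenuation_db,
                dispersion_penalty_db, safety_margin_db):
    # Optical power budget = transmit power minus receiver sensitivity;
    # subtract component attenuation, the dispersion power penalty and a
    # safety margin for unexpected losses. A non-negative result means
    # the link closes.
    budget = tx_power_dbm - rx_sensitivity_dbm
    return budget - (attenuation_db + dispersion_penalty_db + safety_margin_db)

# Illustrative: -3 dBm laser, -20 dBm receiver sensitivity, 12 dB of
# fibre and connector loss, 2 dB dispersion penalty, 3 dB safety margin:
margin = link_margin(-3.0, -20.0, 12.0, 2.0, 3.0)   # 0.0 dB: just closes
```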

Link Attributes

Link attributes: A concatenated link usually includes a number of spliced factory lengths of optical fibre cable. The characteristics of factory lengths are given in § 6. The transmission parameters for concatenated links must take into account not only the performance of the individual cable lengths but also the statistics of concatenation. The transmission characteristics of the factory-length optical fibre cables will have a certain probability distribution, which often needs to be taken into account if the most economic designs for the link are to be obtained. Link attributes are affected by factors other than optical fibre cables, such as splices, connectors, and installation. Attenuation: Attenuation of a link: The attenuation A of a link is given by:

A = αL + αs·x + αc·y

where:
α: typical attenuation coefficient of the fibre cables in a link;
αs: mean splice loss;
x: number of splices in a link;
αc: mean loss of line connectors;
y: number of line connectors in a link (if provided);
L: link length.

A suitable margin should be allocated for future modifications of cable configurations (additional splices, extra cable lengths, ageing effects, temperature variations, etc.). The above equation does not include the signal loss of equipment connectors. The attenuation budget used in designing an actual system should also account for the statistical variations in these parameters. The attenuation coefficient of an installed optical fibre cable is wavelength-dependent.
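The link-attenuation formula above is straightforward to evaluate (the example figures are illustrative only):

```python
def link_attenuation(alpha_db_per_km, length_km,
                     splice_loss_db, n_splices,
                     connector_loss_db, n_connectors):
    # A = alpha * L + alpha_s * x + alpha_c * y   (all terms in dB)
    return (alpha_db_per_km * length_km
            + splice_loss_db * n_splices
            + connector_loss_db * n_connectors)

# Illustrative: a 40 km link at 0.35 dB/km, with ten 0.05 dB splices
# and two 0.5 dB line connectors:
A = link_attenuation(0.35, 40.0, 0.05, 10, 0.5, 2)   # 15.5 dB
```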

Optical Fiber Communications

Optical Fiber Communications: The optical fiber found its first large-scale application in telecommunications systems. Beginning with the first LED-based systems, the technology progressed rapidly to longer wavelengths and laser-based systems with repeater lengths over 30 km. The first applications were primarily digital, since source nonlinearities precluded multichannel analog applications. Early links were designed for the 800- to 900-nm window of the optical fiber transmission spectrum, consistent with the emission wavelengths of the GaAs-AlGaAs materials system for semiconductor lasers and LEDs. The development of sources and detectors in the 1.3- to 1.55-μm wavelength range, and the further improvement in optical fiber loss over those ranges, has directed most applications to either the 1.3-μm window (for low dispersion) or the 1.55-μm window (for minimum loss). The design of dispersion-shifted single-mode fiber, along with the availability of erbium-doped fiber amplifiers, has solidified 1.55 μm as the wavelength of choice for high-speed communications. The largest currently emerging applications for optical fibers are the local area network (LAN) environment for computer data communications, and the local subscriber loop for telephone, video, and data services for homes and small businesses. Both of these applications place a premium on reliability, connectivity, and economy. While existing systems still use point-to-point optical links as building blocks, there is a considerable range of networking components on the market which allow splitting, tapping, and multiplexing of optical signals without the need for optical detection and retransmission.

Types of Fibers: Multimode Optical Fibres

3 min read
Types of fibers: Multimode optical fibres: A 50/125 μm multimode graded index optical fibre cable: The characteristics of a multimode graded index optical fibre cable were specified in Recommendation ITU-T G.651, originally published in 1984 and deleted in 2008. Recommendation ITU-T G.651 covered the geometrical and transmissive properties of multimode fibres having a 50 μm nominal core diameter and a 125 μm nominal cladding diameter. That Recommendation was developed during the infancy of optical fibre solutions for public switched networks. At that time (pre-1984), these multimode fibres were considered the only practical solution for transmission distances in the tens of kilometres and bit rates of up to 40 Mbit/s. Single-mode fibres, which became available shortly after the publication of ITU-T G.651, have almost completely replaced multimode fibres in public switched networks. Today, multimode fibres continue to be widely used in premises cabling applications such as Ethernet, over lengths from 300 to 2 000 m depending on bit rate. With this change in applications, the multimode fibre definitions, requirements, and measurements evolved away from the original ITU-T G.651 and were moved to its modern ITU equivalent, Recommendation ITU-T G.651.1. Recommendation ITU-T G.651.1, Characteristics of a 50/125 μm multimode graded index optical fibre cable for the optical access network, provides specifications for a 50/125 μm multimode graded index optical fibre cable suitable for use in the 850 nm region, the 1 300 nm region, or both wavelength regions simultaneously. This Recommendation contains the recommended values for both the fibre and cable attributes. The applications of this fibre are in specific environments of the optical access network: multi-tenant building sub-networks in which broadband services have to be delivered to individual apartments.
This multimode fibre supports the cost-effective use of 1 Gbit/s Ethernet systems over link lengths of up to 550 m, usually based on 850 nm transceivers. A large percentage of customers worldwide live in such buildings. Owing to the high connection density and the short distribution cable lengths, cost-effective, high-capacity optical networks can be designed and installed using 50/125 μm graded-index multimode fibres. The effectiveness of this network type has been demonstrated by its extensive use for datacom systems in enterprise buildings, with system bit rates ranging from 10 Mbit/s up to 10 Gbit/s. This use is supported by a large series of IEEE system standards and IEC fibre and cable standards, which are used as the main references in Recommendation ITU-T G.651.1.
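The reach figures quoted above (550 m for 1 Gbit/s Ethernet at 850 nm; an overall premises range of roughly 300 to 2 000 m depending on bit rate) can be captured in a small lookup. This is a hedged sketch: the table values are rough numbers taken from the text, not normative ITU-T or IEEE limits, and the function name is an invented illustration.

```python
# Illustrative reach check for 50/125 um multimode fibre premises links.
# The (bit rate -> max length) pairs are rough values from the text;
# consult ITU-T G.651.1 and the relevant IEEE 802.3 clauses for real limits.

MAX_REACH_M = {
    "10M": 2000,  # low bit rates reach the long end of the 300-2000 m range
    "1G":  550,   # 1 Gbit/s Ethernet with 850 nm transceivers (from the text)
    "10G": 300,   # higher bit rates are limited to the short end of the range
}

def link_supported(rate: str, length_m: float) -> bool:
    """True if a link of length_m metres is within the illustrative reach."""
    return length_m <= MAX_REACH_M[rate]

print(link_supported("1G", 500))  # True: within the quoted 550 m figure
print(link_supported("1G", 800))  # False: exceeds it
```

In practice the usable reach also depends on the fibre's modal bandwidth grade and the transceiver type, which is why the standards, not a single table, govern real deployments.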

Types of Fibers: Single-mode Optical Fibres

5 min read
Types of fibers: Single-mode optical fibres: The first ITU-T single-mode optical fibre and cable: The first single-mode optical fibre was specified in Recommendation ITU-T G.652, Characteristics of a single-mode optical fibre and cable, and for this reason ITU-T G.652 fibres are often called "standard single-mode fibres". These fibres were the first to be widely deployed in the public network, and they represent a large majority of the fibres that have been installed. The agreements that led to the first publication of Recommendation ITU-T G.652 formed a key foundation of the modern optical networks that are the basis of all modern telecommunications. Recommendation ITU-T G.652 describes the geometrical, mechanical, and transmission attributes of a single-mode optical fibre and cable whose zero-dispersion wavelength is around 1 310 nm. This fibre was originally optimized for use in the 1 310 nm wavelength region, but it can also be used in the 1 550 nm region. Recommendation ITU-T G.652 was first created in 1984; several revisions have since been made to maintain the continuing commercial success of this fibre in the evolving world of high-performance optical transmission systems. Over the years, parameters have been added to Recommendation ITU-T G.652 and the requirements have been made more stringent to meet changes in market and technological demands, and in manufacturing capability. An example is the addition of a requirement for attenuation at 1 550 nm in 1988; in that year, the chromatic dispersion parameters and requirements were also defined. Other examples include the addition of low water peak (LWP) fibres with negligible sensitivity to hydrogen exposure, and the addition of requirements for polarization mode dispersion (PMD).
However, with the advent of these new capabilities and perceived needs, there was a consensus that some applications would need these attributes for advanced technologies, bit rates, and transmission distances, while other applications would not. Therefore, some options had to be maintained, and it was agreed to create different categories of ITU-T G.652 fibres. At present there are four categories, A, B, C, and D, which are distinguished by the PMDQ link design value specification and by whether the fibre is LWP or not, i.e. the water peak is specified (LWP) or not specified (WPNS), as shown in Table 1. Table 1 – ITU-T G.652 fibre categories
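The two distinguishing attributes described above (the PMDQ link design value, and LWP versus WPNS) can be expressed as a small decision function. The pairing below is the commonly cited one (A and B are WPNS, C and D are LWP, with B and D carrying the tighter PMDQ value); treat it as a sketch to be checked against Table 1 of ITU-T G.652 itself.

```python
# Hedged sketch of the ITU-T G.652 category logic described in the text:
# categories A-D differ in (a) the PMD_Q link design value and
# (b) whether the fibre is low water peak (LWP) or water peak
# not specified (WPNS). The pairing used here is the commonly cited
# one; verify against Table 1 of ITU-T G.652 before relying on it.

def g652_category(tight_pmdq: bool, low_water_peak: bool) -> str:
    """Return the G.652 category letter for the two distinguishing attributes."""
    if not low_water_peak:                 # WPNS fibres
        return "B" if tight_pmdq else "A"
    return "D" if tight_pmdq else "C"      # LWP fibres

print(g652_category(tight_pmdq=True, low_water_peak=True))  # D
```

G.652.D, combining the low water peak with the tighter PMDQ value, is the variant most commonly specified for new installations.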