Wednesday 18 April 2012

LAB MANUALS

              Lab Manual for Measurement and Instrumentation Of MDU ROHTAK


ME-318 F MEASUREMENTS & INSTRUMENTATION LAB
L T P : - - 2
Sessional : 25 Marks
Practical : 25 Marks
Total : 50 Marks
Duration of Exam : 3 Hrs.

List of Experiments :

1. To Study various Temperature Measuring Instruments and to Estimate their Response times.
(a) Mercury – in glass thermometer
(b) Thermocouple
(c) Electrical resistance thermometer
(d) Bi-metallic strip

2. To study the working of Bourdon Pressure Gauge and to check the calibration of the gauge in a deadweight pressure gauge calibration set up.

3. To study a Linear Variable Differential Transformer (LVDT) and use it in a simple experimental set up to measure a small displacement.

4. To study the characteristics of a pneumatic displacement gauge.

5. To measure load (tensile/compressive) using load cell on a tutor.

6. To measure torque of a rotating shaft using torsion meter/strain gauge torque transducer.

7. To measure the speed of a motor shaft with the help of non-contact type pick-ups (magnetic or photoelectric).

8. To measure the stress & strain using strain gauges mounted on simply supported beam/cantilever beam.

9. To measure static/dynamic pressure of fluid in pipe/tube using pressure transducer/pressure cell.

10. To test experimental data for Normal Distribution using Chi Square test.

11. To learn the methodology of pictorial representation of experimental data and subsequent calculations for obtaining various measures of true value and the precision of measurement using Data acquisition system/ calculator.

12. Vibration measurement by Dual Trace Digital storage Oscilloscope.

13. To find out transmission losses in a given transmission line by applying capacitive/inductive load.

14. Process Simulator.


Note:
1. At least ten experiments are to be performed in the Semester.
2. At least seven experiments should be performed from the above list. Remaining three experiments may either be performed from the above list or designed & set by the concerned institution as per the scope of the Syllabus.


Experiment No:1

Aim: To Study various Temperature Measuring Instruments and to Estimate their Response times.
(a) Mercury – in glass thermometer
(b) Thermocouple
(c) Electrical resistance thermometer
(d) Bi-metallic strip

Apparatus used: Mercury thermometer, Thermocouple setup, Platinum thermometer and Bi-metallic strip.

Theory:

(a) Mercury – in glass thermometer:

A liquid-in-glass thermometer is widely used due to its accuracy for the temperature range -200 to 600°C. Compared to other thermometers, it is simple and no other equipment beyond the human eye is required. The LIG thermometer is one of the earliest thermometers. It has been used in medicine, metrology and industry. In the LIG thermometer the thermally sensitive element is a liquid contained in a graduated glass envelope. The principle used to measure temperature is that of the apparent thermal expansion of the liquid. It is the difference between the volumetric reversible thermal expansion of the liquid and its glass container that makes it possible to measure temperature.
The liquid-in-glass thermometer comprises:
1. A bulb, a reservoir in which the working liquid can expand or contract in volume
2. A stem, a glass tube containing a tiny capillary that is connected to the bulb and enlarged at the bottom into the bulb partially filled with the working liquid. The tube's bore is extremely small - less than 0.02 inch (0.5 millimetre) in diameter
3. A temperature scale fixed or engraved on the stem supporting the capillary tube to indicate the range and the value of the temperature. This is the case for precision thermometers, whereas for less accurate thermometers such as industrial thermometers, the scale is printed on a separate card and then protected from the environment. The liquid-in-glass thermometer is usually calibrated against a standard thermometer and at the melting point of water
4. A reference point, a calibration point, the most common being the ice point
5. A working liquid, usually mercury or alcohol
6. An inert gas, used in mercury thermometers intended for high temperatures. The thermometer is filled with an inert gas such as argon or nitrogen above the mercury to reduce its volatilization.
The response of the thermometer depends on the bulb volume, bulb wall thickness, total weight and type of thermometer. The sensitivity depends on the reversible thermal expansion of the liquid compared to the glass. The greater the fluid expansion, the more sensitive the thermometer. Mercury was the liquid most often used because of its good reaction time, repeatability, linear coefficient of expansion and large temperature range, but it is poisonous, so other working liquids are also used.
Fig: Liquid in Glass Thermometer
A mercury-in-glass thermometer, also known as a mercury thermometer, consists of mercury in a glass tube. Calibrated marks on the tube allow the temperature to be read from the length of the mercury column, which varies according to the heat supplied to it. To increase the sensitivity, there is usually a bulb at the end of the thermometer which contains most of the mercury; expansion and contraction of this volume of mercury is then amplified in the much narrower bore of the tube. The response time of the thermometer is characterized by its time constant, the time it takes to register a given fraction of a step change in the measured temperature.

(b) Thermocouple:

An electric current flows in a closed circuit of two dissimilar metals if their two junctions are at different temperatures. The thermoelectric voltage produced depends on the metals used and on the temperature relationship between the junctions. If the same temperature exists at the two junctions, the voltages produced at the junctions cancel each other out and no current flows in the circuit. With different temperatures at the two junctions, different voltages are produced and a current flows in the circuit. A thermocouple can therefore only measure temperature differences between the two junctions.
Fig: Thermocouple

A thermocouple's response time is expressed as a "time constant." The time constant is defined as the time required for a thermocouple's voltage to reach 63.2% of its final value in response to a sudden change in temperature. It takes about five time constants for the voltage to approach 100% of the new temperature value. A thermocouple attached to a heavy mass will respond much more slowly than one that is left free-standing, because its reading is governed by the temperature of the large mass. A free-standing (exposed or bare wire) thermocouple's response time is a function of the wire size (or mass of the thermocouple bead) and the conducting medium. A thermocouple of a given size will react much faster if the conducting medium is water rather than still air.
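As an illustration of the time-constant idea above, the short Python sketch below evaluates a first-order response to a step change in temperature; the time constant and temperatures are assumed example values, not data from the setup.

```python
# A minimal sketch (not part of the manual): first-order response of a
# temperature sensor, illustrating the 63.2% time-constant definition above.
# The time constant tau and temperatures are assumed example values.
import math

def sensor_reading(t, T_initial, T_final, tau):
    """Temperature indicated at time t after a step change, first-order model."""
    return T_final + (T_initial - T_final) * math.exp(-t / tau)

tau = 2.0                          # assumed time constant, seconds
T_initial, T_final = 25.0, 100.0   # assumed step from 25 degC to 100 degC

for n in range(1, 6):              # after 1..5 time constants
    T = sensor_reading(n * tau, T_initial, T_final, tau)
    fraction = (T - T_initial) / (T_final - T_initial)
    print(f"after {n} tau: {T:.1f} degC ({fraction:.1%} of the step)")
# After 1 tau the reading covers ~63.2% of the step; after 5 tau, ~99.3%.
```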

(c) Electrical resistance thermometer:
Resistance thermometers may be called RTDs (resistance temperature detectors), PRTs (platinum resistance thermometers), or SPRTs (standard platinum resistance thermometers). These thermometers operate on the principle that the electrical resistance of pure metal elements changes with temperature. The traditional sensing element of a resistance thermometer consists of a coil of small diameter wire wound to a precise resistance value. The most common material is platinum, although nickel, copper, and nickel-iron alloys compete with platinum in many applications.

A platinum resistance thermometer consists of a fine platinum wire (platinum coil) wound in a non-inductive way on a mica frame M (Figure 1). The ends of this wire are soldered to points A and C, from which two thick leads run along the length of the glass tube (that encloses the set up) and are connected to two terminals (P, P) fixed on the cap of the tube. These are the platinum wire leads. By the side of these leads, another set of leads runs parallel and is connected to the terminals (C, C) fixed on the cap of the tube. These are called compensating leads and are joined together inside the glass tube. The compensating leads and the platinum wire are separated from each other by mica or porcelain separators (D, D). The electrical resistance of the (P, P) leads is the same as that of the (C, C) leads.
Fig: Resistance Thermometer

A time constant indicates the responsiveness of a resistance thermometer to temperature change. A common expression is the time it takes a thermometer to reflect 63.2% of a step temperature change in moving water. Response speed depends on the mass of the thermometer and the rate at which heat transfers from the outer surface to the sensing element. A rapid time constant reduces errors in a system subject to rapid temperature changes.
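The manual does not give the resistance-temperature relation, but a commonly used linear approximation for a platinum element is R(T) = R0(1 + alpha*T). The sketch below uses typical Pt100 values (R0 = 100 ohm, alpha = 0.00385 per degC) as assumptions, not values taken from this experiment.

```python
# Hedged sketch: estimating temperature from an RTD resistance reading using
# the common linear approximation R(T) = R0 * (1 + alpha * T).
# R0 and alpha below are assumed Pt100 values, not taken from the manual.
R0 = 100.0          # resistance at 0 degC, ohms (assumed Pt100 element)
ALPHA = 0.00385     # temperature coefficient, per degC (typical platinum)

def rtd_resistance(T_degC):
    """Resistance of the element at temperature T (linear approximation)."""
    return R0 * (1 + ALPHA * T_degC)

def rtd_temperature(R_ohms):
    """Temperature corresponding to a measured resistance (inverse relation)."""
    return (R_ohms / R0 - 1) / ALPHA

print(rtd_resistance(100.0))    # ~138.5 ohm at 100 degC
print(rtd_temperature(119.25))  # ~50 degC for a 119.25 ohm reading
```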

(d) Bi-metallic strip:

Bonding two metals with dissimilar thermal expansion coefficients can produce useful devices for detecting and measuring temperature changes. A typical pair is brass and steel with typical expansion coefficients of 19 and 13 parts per million per degree Celsius respectively.
Fig: Bimetallic Strip

The examples shown are straight strips, but bimetallic strips are made in coils to increase their sensitivity for use in thermostats. One of the many uses for bimetallic strips is in electrical breakers where excessive current through the strip heats it and bends it to trip the switch to interrupt the current.

A bimetallic strip is used to convert a temperature change into mechanical displacement. The strip consists of two strips of different metals which expand at different rates as they are heated, usually steel and copper, or in some cases brass instead of copper. The strips are joined together throughout their length by riveting, brazing or welding. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled.
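As a rough illustration of why the strip bends, the sketch below compares the free thermal expansion of two layers using the brass and steel coefficients quoted earlier; the strip length and temperature rise are assumed example values.

```python
# Minimal sketch: differential free expansion of the two layers of a
# bimetallic strip (brass on steel), using the coefficients quoted above.
# Strip length and temperature rise are assumed example values.
ALPHA_BRASS = 19e-6   # per degC (from the text)
ALPHA_STEEL = 13e-6   # per degC (from the text)

L = 100.0             # assumed strip length, mm
dT = 50.0             # assumed temperature rise, degC

dL_brass = L * ALPHA_BRASS * dT
dL_steel = L * ALPHA_STEEL * dT

print(f"brass expands {dL_brass:.3f} mm, steel {dL_steel:.3f} mm")
print(f"difference {dL_brass - dL_steel:.3f} mm -> strip curves toward the steel side")
```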

Conclusion: Hence the study of various temperature measuring instruments and their response times is completed.





Experiment No:2

Aim: To study the working of Bourdon Pressure Gauge and to check the calibration of the gauge in a deadweight pressure gauge calibration set up.

Apparatus used: Deadweight Pressure Gauge calibration set up

Theory:  These are used for measurement of pressure and vacuum and are suitable for all clean and non-clogging liquid and gaseous media. Bourdon gauge consists of a hollow metal tube with an oval cross section, bent in the shape of a hook. One end of the tube is closed, the other open and connected to the measurement region. If pressure (above local atmospheric pressure) is applied, the oval cross section will become circular, and at the same time the tube will straighten out slightly. The resulting motion of the closed end, proportional to the pressure, can then be measured via a pointer or needle connected to the end through a suitable linkage.



Fig: Bourdon Tube Gauge

Working of the Bourdon Pressure Gauge: In order to understand the working of the bourdon pressure gauge, we need to consider a cross-section of the Bourdon tube, as shown in the figure.

Fig: Working of Bourdon Gauge

Assume that a pressure P, which is greater than the atmospheric pressure, acts at the pressure inlet of the gauge. According to Pascal's Law, the pressure is transmitted equally in all directions. Therefore,
Pressure acting on the Inner Wall = Pressure acting on the Outer Wall.
Now,
Area of outer wall projected to the pressure = 2πRo·d
(where Ro and Ri are the outer and inner radii of the tube's curvature and d is the depth of the tube cross-section)

Therefore,
Force on outer wall = Fo = Pressure x Area = 2PπRo·d

Similarly,
Force on inner wall = Fi = 2PπRi·d

Since Ro > Ri, it follows that Fo > Fi.
So, the force that tries to unwind the tube is greater than the force that tries to bend it further. Therefore, the tube unwinds due to the extra pressure exerted on it. This unwinding is then recorded on a scale by using a series of gears and a pointer.
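A quick numeric check of this force imbalance is sketched below; the pressure, radii and tube depth are assumed example values, not dimensions of the actual gauge.

```python
# Minimal sketch: force on the outer and inner walls of a Bourdon tube,
# following Fo = 2*P*pi*Ro*d and Fi = 2*P*pi*Ri*d from the text.
# P, Ro, Ri and d are assumed example values, not taken from the manual.
import math

P = 200e3      # gauge pressure, Pa (assumed)
Ro = 0.030     # outer radius of the tube's curvature, m (assumed)
Ri = 0.025     # inner radius of the tube's curvature, m (assumed)
d = 0.005      # depth of the tube cross-section, m (assumed)

Fo = 2 * P * math.pi * Ro * d   # force trying to unwind the tube
Fi = 2 * P * math.pi * Ri * d   # force trying to wind it further

print(f"Fo = {Fo:.1f} N, Fi = {Fi:.1f} N, net unwinding force = {Fo - Fi:.1f} N")
```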


Calibration is the term applied to checking the accuracy or the working condition of the device concerned. So, the calibration of a Bourdon Pressure Gauge refers to checking its accuracy or reliability in taking a reading. The apparatus used for this purpose is called the Dead-Weight Gauge Tester.

Working of the Dead-Weight Gauge Tester: The working of this gauge tester can be understood easily with the help of the following diagram.

Fig: Dead-Weight Gauge Tester

In this figure, gauges A and B are the ones to be calibrated. We can at any stage disengage either gauge by closing the respective valve.
For the illustration purpose, we will just consider the calibration of Gauge A and assume that valve B remains closed.

Let
Weight of Plunger = W
Cross-sectional Area of the stem of Plunger = A

Therefore,
Pressure exerted on the fluid = P = W/A

Now, according to Pascal's Law, pressure is transmitted equally in all directions. Therefore the pressure encountered at the inlet of Gauge A is the same as P.

Now,
if Pressure registered by Gauge A = PA = P
within experimental limits, then the gauge is working properly. If not, then there is some problem which must be detected and accounted for.
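The check described above is simple arithmetic; the Python sketch below evaluates P = W/A and compares it with the gauge indication. The plunger mass, stem area and gauge reading are assumed example values, not data from the tester.

```python
# Minimal sketch: applied pressure in a dead-weight tester, P = W / A,
# and a check of the gauge reading against it. All numbers are assumed
# example values, not taken from the manual.
g = 9.81
mass_kg = 5.0          # mass of plunger plus added weights (assumed)
area_m2 = 4.0e-4       # cross-sectional area of the plunger stem (assumed)

P_applied = mass_kg * g / area_m2   # Pa
P_gauge = 124.0e3                   # pressure registered by Gauge A, Pa (assumed reading)

error = P_gauge - P_applied
print(f"applied {P_applied/1e3:.1f} kPa, gauge {P_gauge/1e3:.1f} kPa, error {error/1e3:+.1f} kPa")
```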

Procedure:

1. Fix the gauge to be tested on one end of the Dead-Weight Gauge tester and make sure that its valve is fully opened. Meanwhile close the other valve tightly so that no fluid leaks out.

2. Next, gently place the plunger in the tester, ensuring that it does not touch the edges of the bowl. Allow some time for the system to attain equilibrium, then take the reading from the gauge. Record both the applied and registered pressure in a table of values. Now, remove the plunger and once again, after some time, record the reading on the gauge. Record it in the table.

3. Now place some weights on the plunger so that the applied pressure is varied. Then, repeat the above mentioned procedure until there are at least six readings. Record them all in the table.

Observations & Calculations:

Sl No | Applied Pressure (P) | PA: Loading | PA: Unloading | PA: Mean | Error neglecting zero error (Mean - P) | Error (PA - P) after removing zero error
------|----------------------|-------------|---------------|----------|----------------------------------------|------------------------------------------
1     | 0                    | 9           | 9             | 9        | 9 (zero error)                         | 0
2     | 5                    | 13          | 14            | 13.5     | 8.5                                    | -0.5
3     | 10                   | 18          | 18            | 18       | 8                                      | -1
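The reduction of these readings can be scripted; the sketch below reproduces the table's error columns from the loading/unloading readings listed above.

```python
# Minimal sketch: reducing the calibration readings tabulated above.
# Each entry is (applied pressure P, loading reading, unloading reading).
readings = [
    (0, 9, 9),
    (5, 13, 14),
    (10, 18, 18),
]

zero_error = (readings[0][1] + readings[0][2]) / 2   # gauge reading at zero applied pressure

for P, loading, unloading in readings:
    mean = (loading + unloading) / 2
    error_raw = mean - P                      # error neglecting the zero error
    error_corrected = (mean - zero_error) - P # error after removing the zero error
    print(f"P={P:>2}: mean={mean:>5}, raw error={error_raw:>4}, corrected error={error_corrected:>5}")
```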

Conclusion: Hence the working of Bourdon Pressure Gauge and checking of calibration on a deadweight pressure gauge is completed.




Experiment No:3

Aim: To study a Linear Variable Differential Transformer (LVDT) and use it in a simple experimental set up to measure a small displacement.

Apparatus used: LVDT setup

Theory: The letters LVDT are an acronym for Linear Variable Differential Transformer, a common type of electromechanical transducer that can convert the rectilinear motion of an object to which it is coupled mechanically into a corresponding electrical signal. LVDT linear position sensors are readily available that can measure movements as small as a few millionths of an inch up to several inches, but are also capable of measuring positions up to ±20 inches (±0.5 m). The transformer's internal structure consists of a primary winding centered between a pair of identically wound secondary windings, symmetrically spaced about the primary. The coils are wound on a one-piece hollow form of thermally stable glass reinforced polymer, encapsulated against moisture, wrapped in a high permeability magnetic shield, and then secured in a cylindrical stainless steel housing. This coil assembly is usually the stationary element of the position sensor. The moving element of an LVDT is a separate tubular armature of magnetically permeable material called the core, which is free to move axially within the coil's hollow bore, and mechanically coupled to the object whose position is being measured. The bore is typically large enough to provide substantial radial clearance between the core and bore, with no physical contact between the core and the coil.
Fig: LVDT

The device consists of a primary coil, two secondary coils, and a moveable magnetic core which is connected to an external device whose position is of interest. A sinusoidal excitation is applied to the primary coil, which couples with the secondary coils through the magnetic core (i.e. voltages are induced in the secondary coils). The position of the magnetic core determines the strength of coupling between the primary and each of the secondary coils, and the difference between the voltages generated across the two secondary coils is proportional to the displacement of the core from the neutral position, or null point.
Fig: LVDT Principle
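Since the difference of the two secondary voltages is proportional to the core displacement, a small reduction of readings could look like the sketch below; the sensitivity constant and the voltage pairs are assumed example values (a real setup would obtain the sensitivity from calibration).

```python
# Minimal sketch: inferring core displacement from the two secondary voltages
# of an LVDT, using the proportionality described above. The sensitivity and
# the voltage readings are assumed example values, not taken from the manual.
SENSITIVITY_MM_PER_VOLT = 2.5     # assumed calibration constant

def lvdt_displacement(v1, v2):
    """Displacement of the core from the null point; the sign gives the direction."""
    return SENSITIVITY_MM_PER_VOLT * (v1 - v2)

readings = [(1.00, 1.00), (1.20, 0.80), (0.70, 1.30)]   # assumed (V1, V2) pairs
for v1, v2 in readings:
    print(f"V1={v1:.2f} V, V2={v2:.2f} V -> displacement {lvdt_displacement(v1, v2):+.2f} mm")
```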

Procedure:
1. Adjust the experimental setup so that the probe is at the zero position.
2. Verify all electrical connections.
3. Switch on the power supply to the LVDT.
4. Record the displacement and the output voltage.

Observations & Calculations:
Sl No | Displacement | Voltage (V1) | Voltage (V2) | Error (V1 - V2)
------|--------------|--------------|--------------|-----------------
      |              |              |              |
Conclusion:  Hence the measurement of a small displacement using LVDT is ______.




Experiment No:4

Aim: To study the characteristics of a pneumatic displacement gauge.

Apparatus used: Model of a pneumatic displacement gauge.

Theory:
In pneumatic devices, the displacement signal is converted to a pressure signal. The device shown below is a pneumatic displacement gauge, also known as a flapper nozzle device.
Fig: Pneumatic Gauge

A pneumatic displacement gauge system operates with air. The signal is transmitted in the form of a variable air pressure (often in the range 3-15 psi, i.e. 0.2 to 1.0 bar) that initiates the control action. One of the basic building blocks of a pneumatic displacement gauge system is the flapper nozzle amplifier. It converts a very small displacement signal (of the order of microns) into a variation of air pressure. The basic construction of a flapper nozzle amplifier is shown in the figure above.

Constant air pressure (20 psi) is supplied to one end of the pipeline, where there is an orifice. At the other end of the pipe there is a nozzle and a flapper. The gap between the nozzle and the flapper is set by the input signal. As the flapper moves closer to the nozzle, there is less airflow through the nozzle and the air pressure inside the pipe increases. On the other hand, if the flapper moves further away from the nozzle, the air pressure decreases. At one extreme, if the nozzle is open (flapper far off), the output pressure equals the atmospheric pressure; if the nozzle is blocked, the output pressure equals the supply pressure. A pressure measuring device in the pipeline can effectively show the pressure variation. The characteristic is inverse: the pressure decreases as the gap increases. Typical characteristics of a flapper nozzle amplifier are shown in the figure below.

The orifice and nozzle diameters are very small. A typical orifice diameter is 0.01 inch (0.25 mm) and a typical nozzle diameter is 0.025 inch (0.6 mm). A typical change in pressure is 1.0 psi (66 mbar) for a change in displacement of 0.0001 inch (2.5 micron). There is an approximately linear range, about 3-15 psi, in the characteristic of the amplifier, which is the normal operating range.

The role of the flapper nozzle lies in its ability to generate a large output air pressure by placing a small obstruction at the orifice (at the nozzle) of an incoming pneumatic signal. This trainer has a flapper nozzle, together with a pressure amplifier, suitably connected to a spring damper and a spring compensator. The trainer is not only used to draw the characteristics of a flapper nozzle, but also highlights the application of the flapper nozzle itself.
The Flapper Nozzle trainer is a pneumatic system. Air at a fixed pressure enters a constriction (a partial obstruction) in its delivery path and then a nozzle. The opening of the nozzle is larger than the constriction. When the flapper is moved away (usually one thousandth of an inch) from the nozzle, the pressure at the nozzle falls to a low value, typically 2 to 3 psi. When the flapper is moved close to the nozzle, the pressure at the nozzle rises to the supply pressure. This pressure is then applied to a pressure amplifier, which in turn moves a beam; the purpose of this beam is to demonstrate the utility of the flapper nozzle. The displacement of this moving beam is proportional to the pressure developed, which depends on the position of the flapper relative to the nozzle.

Fig: Flapper Nozzle System
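The inverse characteristic described above can be sketched with a simple piecewise-linear model. The model form and the gap values below are assumptions for illustration only; the supply pressure and the quoted sensitivity (about 1 psi per 0.0001 inch) come from the text.

```python
# Hedged sketch of the inverse flapper-nozzle characteristic described above:
# output pressure falls from the supply pressure toward atmospheric as the
# flapper-nozzle gap opens. The piecewise-linear model and the gap values
# are assumptions for illustration only.
P_SUPPLY_PSI = 20.0                        # supply pressure (from the text)
P_ATM_PSI = 0.0                            # gauge pressure with the nozzle fully open
SENSITIVITY_PSI_PER_INCH = 1.0 / 0.0001    # ~1 psi per 0.0001 inch (from the text)

def output_pressure(gap_inch):
    """Output (gauge) pressure for a given flapper-nozzle gap, piecewise-linear model."""
    p = P_SUPPLY_PSI - SENSITIVITY_PSI_PER_INCH * gap_inch
    return max(P_ATM_PSI, min(P_SUPPLY_PSI, p))

for gap in (0.0, 0.0005, 0.0010, 0.0015, 0.0020, 0.0030):
    print(f"gap {gap:.4f} in -> output {output_pressure(gap):5.1f} psi")
```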

Conclusion: Hence the characteristics of a pneumatic displacement gauge are studied.





Experiment No:5

Aim: To measure load (tensile/compressive) using load cell on a tutor.

Apparatus used: Load cell on a tutor.

Theory: A Load Cell is defined as a transducer that converts an input mechanical force into an electrical output signal. Load Cells are also commonly known as Load Transducers or Load Sensors.

Load cell designs can be distinguished according to the type of output signal generated (pneumatic, hydraulic, electric) or according to the way they detect weight (bending, shear, compression, tension, etc.)

Hydraulic load cells are force-balance devices, measuring weight as a change in pressure of the internal filling fluid. In a rolling diaphragm type hydraulic load cell, a load or force acting on a loading head is transferred to a piston that in turn compresses a filling fluid confined within an elastomeric diaphragm chamber. As force increases, the pressure of the hydraulic fluid rises. This pressure can be locally indicated or transmitted for remote indication or control. Output is linear and relatively unaffected by the amount of the filling fluid or by its temperature. If the load cells have been properly installed and calibrated, accuracy can be within 0.25% full scale or better, acceptable for most process weighing applications. Because this sensor has no electric components, it is ideal for use in hazardous areas. Typical hydraulic load cell applications include tank, bin, and hopper weighing. For maximum accuracy, the weight of the tank should be obtained by locating one load cell at each point of support and summing their outputs.

Pneumatic load cells also operate on the force-balance principle. These devices use multiple dampener chambers to provide higher accuracy than can a hydraulic device. In some designs, the first dampener chamber is used as a tare weight chamber. Pneumatic load cells are often used to measure relatively small weights in industries where cleanliness and safety are of prime concern. The advantages of this type of load cell include their being inherently explosion proof and insensitive to temperature variations. Additionally, they contain no fluids that might contaminate the process if the diaphragm ruptures. Disadvantages include relatively slow speed of response and the need for clean, dry, regulated air or nitrogen.

 Strain-gage load cells convert the load acting on them into electrical signals. The gauges themselves are bonded onto a beam or structural member that deforms when weight is applied. In most cases, four strain gages are used to obtain maximum sensitivity and temperature compensation. Two of the gauges are usually in tension, and two in compression, and are wired with compensation. When weight is applied, the strain changes the electrical resistance of the gauges in proportion to the load. Other load cells are fading into obscurity, as strain gage load cells continue to increase their accuracy and lower their unit costs.

The following figure is used for compression and tension load measuring on load cell.

Fig: Load Cell

Procedure:

1. Make setup of load cell and tutor.
2. Place weight on the load cell.
3. Note down the reading given by tutor separately for compression and tension.
4. Take 8-10 readings by increasing weight.
5. Compare the actual weight and the weight given by the tutor (see the sketch below).
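The comparison in step 5 can be tabulated as sketched below; the actual loads and tutor indications are assumed example values, not data from the experiment.

```python
# Minimal sketch for procedure step 5: comparing the actual applied load with
# the load indicated by the tutor. The readings below are assumed example
# values, not data from the experiment.
readings = [
    # (actual load in kg, tutor indication in kg)
    (1.0, 1.02),
    (2.0, 2.05),
    (5.0, 4.96),
]

for actual, indicated in readings:
    error = indicated - actual
    pct = 100.0 * error / actual
    print(f"actual {actual:.2f} kg, tutor {indicated:.2f} kg, error {error:+.2f} kg ({pct:+.1f}%)")
```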

Conclusion: Actual tensile & compression loads are _______ & _________.
Tutor tensile & compression loads are _______ & _________.


Experiment No:6

Aim: To measure torque of a rotating shaft using torsion meter/strain gauge torque transducer.

Apparatus used: Torsion meter/strain gauge torque transducer.

Theory:

What is torque?
Torque is the tendency of a force to rotate an object about an axis, fulcrum, or pivot. (or) Torque is defined as a force around a given point, applied at a radius from that point.
An engine produces power by providing a rotating shaft which can exert a given amount of torque on a load at a given rpm. The amount of torque the engine can exert usually varies with rpm.

Facts about calculations:
  1. Power (the rate of doing work) depends on torque and rpm.
  2. Torque and rpm are the measured quantities of engine output.
  3. Power is calculated from torque and rpm by the equation P = 2πNT/60, where T is the torque in N·m and N is the speed in rpm (this is equivalent to P = ωT below).

How to measure torque of a rotating shaft?
The power transmitted can be calculated from the torque, using the equation
P = ω T
Where,
P is the power (in watts),
T is torque (N m)
ω is angular speed (rad / s).
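A short numeric sketch of the relation P = ωT given above; the torque and speed values are assumed example numbers.

```python
# Minimal sketch of the relation P = omega * T given above.
# The torque and speed values are assumed example numbers.
import math

def shaft_power_watts(torque_nm, speed_rpm):
    """Power transmitted by a shaft from torque (N*m) and speed (rpm)."""
    omega = 2 * math.pi * speed_rpm / 60.0   # angular speed, rad/s
    return omega * torque_nm

print(shaft_power_watts(torque_nm=50.0, speed_rpm=1500.0))   # ~7854 W
```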


What is torsion meter?
The deflection measuring system is called a torsion meter. It is an instrument for determining the torque on a shaft, and hence the horsepower of an engine, by measuring the amount of twist of a given length of the shaft. When a shaft is connected between a driving engine and a driven load, a twist (angular displacement) occurs on the shaft between its ends. This angle of twist is measured and calibrated in terms of torque.
Construction of mechanical torsion meter: The main parts of the mechanical torsion meter are as follows: a shaft which has two drums and two flanges mounted on its ends, as shown in the diagram. One drum carries a pointer and the other drum has a torque-calibrated scale. A stroboscope is used to take readings on the rotating shaft.
Operation of mechanical torsion meter: One end of the shaft of the torsion meter is connected to the driving engine and its other end to the driven load. An angle of twist is experienced by the shaft along its length between the two flanges which is proportional to the torque applied to the shaft. A measure of this angle of twist becomes a measure of torque when calibrated. The angular twist caused is observed on the torque calibrated scale corresponding to the position of the pointer. As the scale on the drum is rotating, reading cannot be taken directly. Hence a stroboscope is used. The stroboscope’s flashing light is made to fall on the scale and the flashing frequency is adjusted till a stationary image is obtained. Then the scale reading is noted.
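The manual does not state the twist-to-torque relation, but such a calibration rests on the standard elastic torsion formula T = GJθ/L, with J = πd⁴/32 for a solid circular shaft. The sketch below evaluates it for assumed shaft data; it is an illustration, not the calibration of the actual meter.

```python
# Hedged sketch (not from the manual): converting a measured angle of twist
# into torque with the standard torsion formula T = G * J * theta / L,
# where J = pi * d^4 / 32 for a solid circular shaft. Shaft data are assumed.
import math

G = 80e9    # shear modulus of steel, Pa (typical value, assumed)
d = 0.025   # shaft diameter, m (assumed)
L = 0.5     # gauge length between the two flanges, m (assumed)

J = math.pi * d**4 / 32   # polar second moment of area, m^4

def torque_from_twist(theta_deg):
    """Torque (N*m) corresponding to a measured twist over the gauge length."""
    theta_rad = math.radians(theta_deg)
    return G * J * theta_rad / L

print(torque_from_twist(0.5))   # torque for a 0.5 degree twist, ~54 N*m
```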


What is strain gauge torque transducer?
The strain monitoring system is called a torque meter or strain gauge torque transducer.
A torque sensor is a transducer that converts a torsional mechanical input into an electrical output signal; it is also commonly known as a torque transducer.
Torque is measured by either sensing the actual shaft deflection caused by a twisting force, or by detecting the effects of this deflection. The surface of a shaft under torque will experience compression and tension, as shown in figure below.

Fig: Strain Gauge Torque Transducer

To measure torque, strain gage elements usually are mounted in pairs on the shaft, one gauge measuring the increase in length (in the direction in which the surface is under tension), the other measuring the decrease in length in the other direction.
A strain gage can be installed directly on a shaft. Because the shaft is rotating, the torque sensor can be connected to its power source and signal conditioning electronics via a slip ring. The strain gage also can be connected via a transformer, eliminating the need for high maintenance slip rings. The excitation voltage for the strain gage is inductively coupled, and the strain gage output is converted to a modulated pulse frequency as shown in figure. Maximum speed of such an arrangement is 15,000 rpm.

Fig: Strain Gauge Working

Conclusion: Hence the torque of a rotating shaft is _______.




Experiment No:7

Aim: To measure the speed of a motor shaft with the help of non-contact type pick-ups (magnetic or photoelectric).

Apparatus used: Optical pick up

Theory: Besides specific measurement requirements, application conditions determine the choice of the appropriate sensor technology. Because of their ability to withstand harsh environments and abrasive conditions, non-contact magnetic sensors should be used for the most critical functions inside the engine compartment. For rotational speed and position detection, to compensate for position tolerances and position drifts of the mechanical connection without degraded performance, magnetic sensors with a large control tolerance field are used.
To control the speed of a prime mover, speed controls compare actual speed to desired, or set, speed. The speed sensor most often used to detect prime mover speed is the magnetic pickup (MPU). When a magnetic material (usually a gear tooth driven by the prime mover) passes through the magnetic field at the end of the magnetic pickup, a voltage is developed. The frequency of this voltage is translated by the speed control into a signal which accurately depicts the speed of the prime mover. The gap between the end of the MPU and the gear tooth is set at 0.25 to 1.02 mm (0.010 to 0.040 inch) at the closest point. The MPU will be damaged if it touches the moving gear. A properly installed MPU will provide as much as 50 Vac (rms); most Woodward controls require a minimum of 1.5 Vac at the lowest speed. Voltage decreases as the MPU is moved farther from the gear. If the gap between the pickup and the gear cannot be measured directly, it can be determined by counting the number of turns the pickup is backed away from the gear. One full turn counterclockwise will move the MPU out 0.0555 inch (1.5 mm for the metric model).


Procedure:
An electric tachometer consists of a transducer, which converts rotational speed into an electrical signal, coupled to an indicator. The transducer produces an electrical signal in proportion to speed. The signal may be in analog form or in the form of pulses. Tachometers or pickups of this type produce pulses from a rotating shaft without being mechanically connected to it. As the energy produced by these devices is not sufficient to actuate an indicator directly, amplifiers of sufficient sensitivity are employed. The various types of non-contact pick-ups are:
•           Optical pick up (photoelectric or photoconductive cell)
•           Electromagnetic pick up
•           Capacitive pick up

Here we will measure the speed with an optical pick up. As optical pick-ups have no moving parts, speeds of up to 3 million rpm can be measured. These are available in a variety of designs using the principle of shaft rotation to interrupt a beam of light falling on a photoelectric or photoconductive cell. The pulses thus obtained are first amplified and then either fed to an electronic counter or shaped into an analog signal and connected to the indicator. A bright white spot is made on the rotating shaft. A beam of light originating from the tachometer case hits the white spot and the reflected light falls on the photoconductive cell inside the case, producing pulses in a transistorised amplifier, which in turn causes the indicator to deflect; the deflection is a measure of the speed of the shaft.

Observations & Calculations:

Formula used: - Speed (rpm) = Frequency x Diameter of Disk / No. of segments.
Now,
1.  Connect the CKT & CRO with the required apparatus & switch on the supply.
2.  Adjust the speed of the motor by the knob and wait for some time till the motor attains the maximum speed at corresponding knob position.
3.  Measure the frequency from the output wave on the CRO.
4.  Find the speed of the motor.

Calculations: At knob position (A)
RPM = (frequency) x diameter of disc / No. of teeth of segments
N = RPM = f x d / T, where f = 1/t
Where
t = time period of one cycle of the output wave = 1.8 x 2 ms = 3.6 x 10^-3 s [read on the CRO], so f = 1/t = 2.79 x 10^2 Hz, and
d = 56.5 mm, T = 60.

Therefore, R.P.M = 2.79 x 10^2 x 56.5 / 60 = 262 rpm (see the sketch below).
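A small sketch reproducing the worked calculation above, using the manual's relation RPM = f x d / T with the quoted example values.

```python
# Minimal sketch reproducing the worked example above, using the manual's
# relation RPM = f x d / T with the quoted example values.
t = 3.6e-3      # period of one cycle of the output wave, s (1.8 div x 2 ms/div)
f = 1.0 / t     # pulse frequency, Hz (~2.79e2)
d = 56.5        # diameter of the disc, mm (as quoted)
T = 60          # number of teeth / segments (as quoted)

rpm = f * d / T
print(f"f = {f:.1f} Hz, speed = {rpm:.0f} rpm")   # ~262 rpm
```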

Conclusion: Hence the Speed of position ‘A’ = 262 rpm





Experiment No:8

Aim: To measure the stress & strain using strain gauges mounted on simply supported beam/cantilever beam.

Apparatus used: Strain gauge Kit, cantilever beam weights, multimeter.

Theory:

When external forces are applied to a stationary object, stress and strain are the result. Stress is defined as the object's internal resisting forces, and strain is defined as the displacement and deformation that occur. For a uniform distribution of internal resisting forces, stress can be calculated by dividing the force (F) applied by the unit area (A). Strain is defined as the amount of deformation per unit length of an object when a load is applied. Strain is calculated by dividing the total deformation of the original length by the original length (L).

Fig: Stress - Strain Concept

Fundamentally, all strain gauges are designed to convert mechanical motion into an electronic signal. A change in capacitance, inductance, or resistance is proportional to the strain experienced by the sensor. If a wire is held under tension, it gets slightly longer and its cross-sectional area is reduced. This changes its resistance (R) in proportion to the strain sensitivity (S) of the wire's resistance. When a strain is introduced, the strain sensitivity, which is also called the gauge factor (GF), is given by:
GF= (∆R/R)/(∆L/L)

There are many types of strain gauges. Among them, a universal strain gauge has a structure such that a grid-shaped sensing element of thin metallic resistive foil (3 to 6µm thick) is put on a base of thin plastic film (15 to 16µm thick) and is laminated with a thin film.

Fig: Strain Gauge

The strain gauge is tightly bonded to a measuring object so that the sensing element (metallic resistive foil) may elongate or contract according to the strain borne by the measuring object. When bearing mechanical elongation or contraction, most metals undergo a change in electric resistance. The strain gauge applies this principle to strain measurement through the resistance change. Generally, the sensing element of the strain gauge is made of a copper-nickel alloy foil. The alloy foil has a rate of resistance change proportional to strain with a certain constant.

Procedure:

1.  Arrange the cantilever beam, ammeter and voltmeter as shown in figure.
2.  After this, put the weight on the rod of cantilever beam.
3.  Measure the digital display reading for a particular weight.
4.  Measure the ammeter and voltmeter readings (the voltmeter reading is in microvolts).
5.  Increase the weight.
6.  Repeat the steps for increased weight.
7.  Measure all dimensions of scale of cantilever.

Observations & Calculations:

Stress=F/A=Wg/A
Strain=∆L/L
GF= (∆R/R)/(∆L/L)

Depending upon the beam used in the apparatus (simply supported or cantilever), the force, stress and strain values vary accordingly (see the sketch below).
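The formulas above can be evaluated directly for one set of readings, as sketched below; the weight, beam cross-section, gauge factor and resistance change are assumed example values, not data from the experiment.

```python
# Minimal sketch: evaluating the formulas above for one set of readings.
# The weight, beam cross-section, gauge factor and resistance change are
# assumed example values, not data from the experiment.
g = 9.81
W = 0.5            # applied mass, kg (assumed)
A = 25e-3 * 3e-3   # beam cross-sectional area, m^2 (assumed 25 mm x 3 mm)

GF = 2.0           # gauge factor of the strain gauge (typical, assumed)
R = 120.0          # unstrained gauge resistance, ohm (assumed)
dR = 0.024         # measured change in resistance, ohm (assumed)

stress = W * g / A       # stress = Wg / A, Pa
strain = (dR / R) / GF   # from GF = (dR/R) / (dL/L)

print(f"stress = {stress/1e6:.3f} MPa, strain = {strain:.6f}")
```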

Conclusion: Hence stress=________  & strain=________.





Experiment No:9

Aim: To measure static/dynamic pressure of fluid in pipe/tube using pressure transducer/pressure cell.

Apparatus used: Pressure transducer Kit, multimeter etc.

Theory:

Pressure is defined as force per unit area that a fluid exerts on its surroundings. A pressure measurement can be described as either static or dynamic. The pressure in cases where no motion is occurring is referred to as static pressure. Examples of static pressure include the pressure of the air inside a balloon or water inside a basin. Often times, the motion of a fluid changes the force applied to its surroundings. Such a pressure measurement is known as dynamic pressure measurement. For example, the pressure inside a balloon or at the bottom of a water basin would change as air is let out of the balloon or as water is poured out of the basin.

Because of the great variety of conditions, ranges, and materials for which pressure must be measured, there are many different types of pressure sensor designs. Often pressure can be converted to some intermediate form, such as displacement. The sensor then converts this displacement into an electrical output such as voltage or current. The three most universal types of pressure transducers of this form are the strain gage, variable capacitance, and piezoelectric.

Fig: Pressure Transducer

Procedure:

1.  First, arrange the pressure transducer, multimeter and voltmeter.
2.  After that increase the pressure in the pressure transducer.
3.  Set the readings of pressure transducer on a particular reading.
4.  Now note the display reading on Kit.
5.  Also note the voltmeter & ammeter readings.
6.  Repeat for a number of readings with different pressures applied to the transducer.
7.  Compare the value of pressure applied on transducer & display readings.

Observations & Calculations:

Theoretically, P=ρg∆H
Where,
ρ=density of water in pipe
g=acceleration due to gravity
∆H=change in head
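The theoretical value from P = ρg∆H can be compared against the transducer indication as sketched below; the head and the indicated pressure are assumed example values.

```python
# Minimal sketch: theoretical static pressure from the head of water,
# P = rho * g * dH, compared with the transducer indication. The head and
# the transducer reading are assumed example values.
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # acceleration due to gravity, m/s^2
dH = 0.75      # change in head, m (assumed)

P_theoretical = rho * g * dH   # Pa
P_indicated = 7.5e3            # reading shown by the transducer/kit, Pa (assumed)

error = P_indicated - P_theoretical
print(f"theoretical {P_theoretical:.0f} Pa, indicated {P_indicated:.0f} Pa, error {error:+.0f} Pa")
```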

Conclusion: Hence the pressure of the fluid in pipe is _______.