1. Microsoft Lync Server 2010 Monitoring and Archiving (Module 18) Slide Objective:
Notes:
2. Session Objectives. At the end of this session you will be able to: Recap Voice Quality Module
Discuss the Monitoring Server reporting options for Lync Server 2010
Discuss Troubleshooting options and tools available for Lync Server 2010. Slide Objective: Present objectives of the module
Notes:
Within the industry, you are all familiar with Quality of Service. In this session you will come to know and understand a new, state-of-the-art approach called Quality of Experience (QoE). As a result of this session you will be able to:
Explain what the traditional approaches to IP Telephony voice quality (Quality of Service and Network Service Quality) are and how they differ from the new approach (Quality of Experience)
Explain the Quality of Experience approach to your peers
Install and configure the Monitoring server
Review reporting options for the Monitoring Server
Evaluate Monitoring Server reports and determine what constitutes acceptable performance based on MOS results
3. Agenda Recap Voice Quality
Monitoring Server
Quality Monitoring and Reporting
Troubleshooting. Slide Objective: To provide an overview of the topics covered: Quality of Service and the Monitoring Server
Notes:
This presentation provides an overview of Quality of Service and the Monitoring Server. It covers:
Explaining the Monitoring Server as a new approach to Quality of Service
What's involved in configuring the Monitoring Server
The reporting options offered once the server is configured
4. Voice Quality Recap Understand the important factors that determine end user Voice Quality experiences
Utilize Lync Server 2010 support for network layer voice traffic management
Recognize the Lync Server 2010 improvements for Voice Quality performance and end user experiences
Know how to monitor and diagnose Voice Quality issues. Slide Objective: Recap Module 13 Voice Quality considerations
Notes:
Understand the important factors
What defines voice quality? Reliability and quality.
What is good quality? Good means good enough (i.e., something users don't notice).
Quality issues: devices, performance, gateways, network
Utilize CAC
How much bandwidth?
Network QoS
VLAN support
End user experience
Media bypass improves audio quality
Calls stay established
Voice Resiliency
Higher quality audio
New devices
Client feedback
Monitor and diagnose issues
Diagnostics
Synthetic transactions
Monitoring server!
5. Monitoring Server Architecture for Lync Server 2010
Monitoring Server captures both call detail record (CDR) and Quality of Experience (QoE) data, which incorporates file transfer, application sharing, and remote assistance
Instant Messaging archiving is covered solely by the Archiving Server. Slide Objective: Summarize the approach of Monitoring
Notes:
In OCS 2007, Archiving & CDR was used to
Archive the actual content of IM conversations and group conferences
Capture usage information for CDRs (this includes: file transfer, a/v conversation, application sharing, remote assistance)
In Lync Server 2010 (and OCS 2007 R2), the Monitoring Server captures all media except instant messaging (this is covered by the new Archiving Server)
Monitoring Server collects two types of data:
Quality of Experience (QoE) data that includes numerical data indicating the quality of calls on your network, and information about participants, device names, drivers, IP addresses, and endpoint types involved in calls and sessions. These quality metrics are collected at the end of every VoIP call and every video call from the participant endpoints, including IP phones, Lync 2010, the Microsoft Office Live Meeting client, and A/V Conferencing Servers and Mediation Servers. For Mediation Servers, metrics are gathered from both the leg between Mediation Server and UC endpoints, and the leg between Mediation Server and the media gateway.
Call Detail Records (CDRs), which capture usage information related to VoIP calls, IM messages, A/V conversations, meetings, file transfers, application sharing, and remote assistance. CDR data is captured for both peer-to-peer and multiparty conferences. Note that the content of IM messages is not captured in CDR data; to preserve IM content for compliance reasons, use the Archiving Server feature.
6. Monitoring Server. Slide Objective: Identify key features regarding metrics
Notes:
The Monitoring Server provides these key features. See bullet items for detail features and descriptions.
Benefits of Using Monitoring Server include:
Identify and isolate problems in your Lync Server 2010 deployment.
Understand overall and per-user usage of Lync Server 2010.
Receive alerts that can notify you of server and network problems that impact the quality of audio and video (using Microsoft System Center Operations Manager (SCOM) and the R2 SCOM Management Pack).
Perform troubleshooting in response to end-user complaints about your deployment's reliability or media quality.
Examine the media quality of real sessions to assess current deployment quality and prepare for larger rollouts.
Gather session usage statistics for return-on-investment calculations, and view trends to plan for post-deployment growth.
Use several built-in standard reports that are ready as soon as Monitoring Server is running.
7. Monitoring Scenarios (1 of 2) Users
Tier 1: Help desk
Tier 2: NOC
Tier 3: Network Engineering
Near Real Time Monitoring
Alerts when VoIP quality degrades
Health Model provides an overall view
View a user's quality history at any time
Proactively identify worst performing endpoints
Microsoft System Center Operations Manager (SCOM) Pack. Slide Objective: Identify Monitoring Server monitoring scenarios
Notes:
There are many scenarios that demand a quantified understanding of the media quality that users are experiencing in a Lync Server 2010 deployment. If you cannot measure the service quality, it is not possible to manage the service. Consequently, choosing to include the Monitoring Server in a Lync Server 2010 solution is a prudent practice.
With the Monitoring Server in your deployment, you can do the following:
Gather statistics on media quality at individual locations or based on a grouping of subnets.
Proactively monitor and troubleshoot media quality of experience issues.
Perform diagnostics in response to VoIP user complaints.
View trends that can help you with post-deployment growth and measure results against the service level agreement.
8. Monitoring Scenarios (2 of 2) Quality of Experience Validation
Verify media quality at each stage of your rollout
Use Mean Opinion Score (MOS) to set up QoE-based Service Level Agreements (SLAs)
Identify the QoE levels by location
VoIP Planning
Right-provision your network for QoE
Watch out for QoE hotspots
Identify QoE trends. Slide Objective: Identify Monitoring Server monitoring scenarios
Notes:
There are many scenarios that demand a quantified understanding of the media quality that users are experiencing in a Lync Server 2010 deployment. If you cannot measure the service quality, it is not possible to manage the service. Consequently, choosing to include the Monitoring Server in a Lync Server 2010 solution is a prudent practice.
With the Monitoring Server in your deployment, you can do the following:
Gather statistics on media quality at individual locations or based on a grouping of subnets.
Proactively monitor and troubleshoot media quality of experience issues.
Perform diagnostics in response to VoIP user complaints.
View trends that can help you with post-deployment growth and measure results against the service level agreement.
9. Monitoring Server Endpoint Reporting. Slide Objective: Show how the server pulls information for reports
Notes:
This drawing shows that the Monitoring Server accepts CDR information from every type of call scenario and all supported endpoints, including AVMCU, Mediation Server, LMC, Lync 2010 Phone Edition, and OC 2007.
The Microsoft OCS 2007 Quality of Experience Server Audio and Video Metric Processing Guide, published October 2007, states:
At the end of each call, the Unified Communications (UC) endpoints send an A/V (audio/video) metric report to the Monitoring Server. (Illustration adapted from same guide.)
(This is still the case for Lync Server 2010.)
10. Key Metrics per Call. Slide Objective: Identify key metrics per call
Notes:
The arrows indicate features that were introduced with Lync Server 2010.
Some of the quality metrics collected for each call are listed on this slide. Network conditions have already been discussed earlier. The details of the Mean Opinion Score (MOS) measurements are coming up.
The metrics, more commonly referred to as A/V Metric Reports, are generated by the endpoints that terminate media (Mediation Server, AVMCU, OC, Tanjay) and that support the metrics. The A/V metric report itself, in XML format, is routed to the QMS database via a SIP SERVICE request.
Overall, about 35 different metrics are collected.
The [MS-QoE]: Quality of Experience Monitoring Server Protocol Specification is a Microsoft proprietary protocol used for publishing audio and video Quality of Experience (QoE) metrics. A client calculates QoE metrics and then sends them to the QoE Monitoring Server for monitoring and diagnostics purposes.
This specification details the protocol as well as the metrics collected. The appendix also publishes the MS-RTCP-Metrics schema for OCS RTM and OCS R2.
In terms of MOS, the data collected for all supported endpoints (A/V MCU, Mediation Server, LMC, Tanjay, OC) includes:
Network MOS: a prediction of the wideband Listening Quality MOS of the audio played to the user, taking into consideration only network factors.
For OC 2007 clients, the following are additionally collected:
Listening MOS: a prediction of the wideband Listening Quality MOS of the audio stream that is played.
Sending MOS: a prediction of the wideband Listening Quality MOS of the audio stream that is being sent from the user.
Conversational MOS: a prediction of the narrowband Conversational Quality MOS of the audio stream that is played to the user. It represents how a large group of people would rate the quality of the connection for holding a conversation.
11. Voice Quality Measurement: Absolute Category Rating (ACR), the Traditional Assessment of Voice Quality
Subjective test of voice quality based on a scale of 1-5.
Scoring is done by a group of testers listening to VoIP calls and providing their opinion of voice quality.
MOS is the average Absolute Category Rating (ACR). Slide Objective: Discuss subjective testing
Notes:
The basis of all measures of voice quality is subjective testing, because how a person perceives the quality of speech is inherently subjective and affected by human perception. There are several different methodologies for subjective testing. For most voice quality measures an Absolute Category Rating (ACR) scale test is used. In addition, you need to monitor at least 30 calls to get an accurate ACR test.
In an ACR subjective test, a statistically significant group of people rate their experience on the following scale from 1 to 5:
Excellent 5
Good 4
Fair 3
Poor 2
Bad 1
The average of the scores is called a Mean Opinion Score (MOS).
In an ACR subjective test, the resulting MOS value from a test is relative to the range of experience exposed to the group and to the type of experience being rated. This means that MOS values between tests cannot be compared unless these conditions are the same.
Because it is impractical to conduct subjective tests of voice quality for a live communication system, the UC solution generates ACR MOS values by using advanced algorithms to objectively predict the results of a subjective test.
The above notes are excerpts from the Microsoft OCS 2007 Quality of Experience Monitoring Server Guide.
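To make the arithmetic concrete, here is a minimal PowerShell sketch that averages a set of made-up ACR ratings into a MOS value; the ratings below are sample data, not measurements from any deployment.
    # Hypothetical ACR ratings (1 = Bad ... 5 = Excellent) from a panel of listeners
    $acrRatings = 4, 5, 3, 4, 4, 2, 5, 4, 3, 4
    # MOS is simply the arithmetic mean of the individual ACR scores
    $mos = ($acrRatings | Measure-Object -Average).Average
    "MOS = {0:N2} from {1} ratings" -f $mos, $acrRatings.Count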
12. Voice Quality Test Options Subjective:
Uses panel of testers to determine VoIP quality
Results vary from one test to another
Active: Inject a reference signal into the stream and compare it to the output at the other end
Objective:
Monitoring Server approach
Perceptual Evaluation of Speech Quality (PESQ): an ITU standard that uses a similar approach
Passive: Output signal is compared to a model to predict perceived quality
Slide Objective: Discuss voice quality test options as they relate to Monitoring Server reporting
Notes:
This section covers Mean Opinion Score (MOS) methodology for evaluating VoIP quality, types of reports available by default with the Monitoring server and sample reports for comparison. The sample reports are for reference only; reports from a production environment should be created based on a baseline established during preliminary testing. The baseline reports can then be used to track changes in the environment to see if VoIP quality is improving or degrading.
Actual experimental measurement of voice quality in traditional telephony and IP telephony networks is very complex and typically only done to identify an already suspected issue. The most common form of experimental measurement is active measurement, which injects a known reference signal and compares it to the degraded signal to predict perceived quality, using algorithms such as the ones previously described. That approach generally requires you to set up and run tests in parallel from the actual use of the system, by inserting measuring hardware into the network and sending actual test data across the network. Most common algorithms such as PESQ require active measurement.
Passive measurement is a newer, more complex, and less commonly used technique. Its advantage is that it allows in-place assessment of actual quality of the live traffic. In passive measurement, no reference signal is used; instead, the degraded signal is inspected as received and a model is used to predict perceived quality.
Most current passive measurement models only consider the transport layer effects (loss, jitter, and delay for the session) to estimate a MOS. Looking at the transport layer can provide a vague understanding of the quality of the call, but it does not take into account other important aspects that can be discovered only by payload or datagram examination considering the actual speech data being transmitted. Payload examination should include important information such as noise level, echo, gain, and talk-over (or double talk). Without this type of information, a call could have a very high MOS based on network conditions, even though problems like echo and noise could make the communication unacceptable. Passive payload measurement approaches are more algorithmically and computationally complex than passive network measurement approaches.
PESQ: 'Perceptual Evaluation of Speech Quality' (from http://www.pesq.org/)
Because conducting subjective tests to assess voice quality is impractical for measuring the voice quality of a live communication system, the UC solution uses advanced algorithms to objectively predict the results of the subjective test to generate ACR MOS values. The UC solution provides two classes of MOS values, Listening Quality MOS (MOS-LQ) and Conversational Quality MOS (MOS-CQ).
MOS-LQ does not take into account any of the bi-directional effects such as delay and echo.
MOS-CQ measures the bi-directional speech quality and takes into account the listening quality in each direction and the bi-directional effects such as echo and delay.
Psytechnics (http://www.psytechnics.com/page.php?id=060307&section=newsandevents/pressreleases/2007) is an independent Microsoft UCG Monitoring Server partner that owns patented technology at the heart of 5 ITU-T standards including PESQ and that has developed an especially effective set of algorithms to provide real-time estimates of users' subjective perception (typically expressed as MOS) across a very wide range of conditions, on the basis of its own extensive subjective testing. Psytechnics performed this study as part of an ongoing benchmarking and performance analysis consulting program.
The IP impairment conditions used for the subjective experiments were derived from the ITU-T G.1050 model (http://www.itu.int/itudocr/itu-t/aap/sg12aap/history/g1050/g1050_ww9.doc).
Actual measured average packet loss ranged from 0% to 25%, mean absolute jitter ranged from 0 ms to about 45 ms (with min/max jitter ranging from -3 ms/3 ms to about -500 ms/500 ms), base delay was 50 ms one way, and packet delay (90th percentile) ranged from 0 ms to about 600 ms.
This subjective experimentation was conducted both using North American English (clean speech) and using British English with office babble at 25-dB signal-to-noise ratio. Only main results of the North American English experiment are shown here. The British English experiment led to very similar relative results.
13. Monitoring Server Reporting: Classes of MOS Scores. Listening Quality MOS (MOS-LQ):
Commonly used class of MOS scores for VoIP deployments
Does not consider bi-directional effects, such as delay or echo
Microsoft UC provides three wideband MOS-LQ metrics:
Network MOS: Audio played to user
Listening MOS: Audio played to user
Sending MOS: Audio sent from user
Conversational Quality MOS (MOS-CQ):
Considers Listening Quality on both ends, plus bi-directional effects
Microsoft UC provides one narrowband MOS-CQ score
Conversational MOS: Audio played to user. Slide Objective: Discuss MOS-LQ and MOS-CQ
Notes:
Listening MOS is a prediction of the wideband Listening Quality (MOS-LQ) of the audio stream that is played to the user. This value takes into consideration the audio fidelity, distortion, speech, and noise levels, and from this data predicts how a large group of users would rate the quality of the audio they hear.
The Listening MOS varies depending on:
The codec used
A wideband or narrowband codec
The characteristics of the audio capture device used by the person speaking (person sending the audio)
Any transcoding or mixing that occurred
Defects from packet loss or packet loss concealment
The speech level and background noise of the person speaking (person sending the audio)
Due to the large number of factors that influence this value, it is most useful to view the Listening MOS statistically rather than by using a single call.
14. MOS Detail Network MOS:
Considers only network factors, such as:
Codec used
Packet loss
Packet reorder
Packet errors
Jitter
Useful for identifying network conditions impacting audio quality
Supported by all UC endpoints, except for Exchange 2007 UM
Listening MOS:
Considers codec used, capture device characteristics, transcoding, mixing, defects from packet loss/packet loss concealment, speech level, and background noise
Useful for identifying payload effects impacting audio quality. Slide Objective: Review MOS results and what they measure
Notes:
(Previous slide notes expand on Listening MOS.)
Network MOS is a prediction of the wideband Listening Quality Mean Opinion Score (MOS-LQ) of audio that is played to the user. This value takes into consideration only network factors such as codec used, packet loss, packet reorder, packet errors, and jitter.
The difference between Network MOS and Listening MOS is that the Network MOS considers only the impact of the network on the listening quality, whereas Listening MOS also considers the payload (speech level, noise level, etc). This makes Network MOS useful for identifying network conditions impacting the audio quality being delivered.
For each codec, there is a maximum possible Network MOS that represents the best possible Listening Quality MOS under perfect network conditions.
15. MOS Detail (continued) Sending MOS:
Considers:
Capture device characteristics
Speech level
Background noise
Useful for identifying device issues and contrasting to Listening MOS
Conversational MOS:
Considers the same factors as Listening MOS, plus echo, network delay, delay due to jitter buffering, and delay due to devices
Useful for identifying bi-directional effects. Slide Objective: Review MOS results and what they measure
Notes:
Sending MOS is a prediction of the wideband Listening Quality Mean Opinion Score (MOS-LQ) of the audio stream that is being sent from the user. This value takes into consideration the speech and noise levels of the user along with any distortions, and from this data predicts how a large group of users would rate the audio quality they hear.
The Sending MOS varies depending on:
The user's audio capture device characteristics
The speech level and background noise of the user's device
Due to the large number of factors that influence this value, it is most useful to view the Sending MOS statistically rather than by using a single value.
Conversational MOS is a prediction of the narrowband Conversational Quality (MOS-CQ) of the audio stream that is played to the user. This value takes into consideration the listening quality of the audio played and sent across the network, the speech and noise levels for both audio streams, and echoes. It represents how a large group of people would rate the quality of the connection for holding a conversation.
The Conversational MOS varies depending on the same factors as Listening MOS, as well as the following:
Echo
Network delay
Delay due to jitter buffering
Delay due to devices
Due to the large number of factors that influence this value, it is most useful to view the Conversational MOS statistically rather than by using a single value.
16. Max MOS Rating by Codec Network MOS scores vary considerably based on call type
Call type determines codec used
Different codecs have different maximum Network MOS ratings
UC to UC appears to have better VoIP quality than UC to PSTN (RTAudio NB) when it may not. Slide Objective: Discuss Monitoring Server maximum MOS rating by codec
Notes:
Some customers have reported low Network MOS values when, in reality, they were near the top of what is achievable using a specific codec. The table shows how a Unified Communications (UC) to UC call uses the RTAudio codec in wideband mode and can return a maximum Network MOS of 4.1, whereas the same UC client calling a PSTN phone would use the RTAudio codec in narrowband mode and only be able to achieve a maximum Network MOS of 2.95.
It appears as though the UC to UC call would have much better voice quality when in fact both codec modes are operating at their maximum. RTAudio operating in narrowband mode is equivalent to an existing PBX phone running a G.711 codec.
The UC solution makes use of both narrowband (8 kHz sample rate) and wideband (16 kHz sample rate) audio codecs. In order to provide consistency in measuring MOS-LQ, all of the MOS-LQ values are reported on a wideband MOS-LQ scale instead of the traditional narrowband MOS-LQ scale that other systems provide.
The difference between the wideband MOS-LQ scale and the narrowband MOS-LQ scale is the range of the experience played to the group of people who were in the subjective test. In the case of narrowband MOS-LQ, the group is exposed to speech where only narrowband codecs are used, so the listeners lose any audio frequency content above 4 kHz. For wideband MOS-LQ, the group is exposed to speech where both narrowband and wideband codecs are used. Since listeners prefer the additional audio frequency content that can be represented in wideband audio, narrowband codecs will have a lower score on a wideband MOS-LQ scale than on a narrowband MOS-LQ scale. For example, G.711 is typically cited as having a narrowband MOS-LQ score of ~4.1, but when compared to wideband codecs on a wideband MOS-LQ scale, G.711 may have a score of only approximately 3.5.
17. Monitoring Server Overview. Works with either a Standard Edition server or an Enterprise Edition pool (any supported topology)
Requires SQL client tools on the Monitoring Server if the SQL database is on another server
A Microsoft System Center Operations Manager (SCOM) infrastructure is desirable
SQL Server Reporting Services (for detailed reports). Slide Objective: Discuss what the Monitoring Server does and how it works
Notes:
Before you install the Monitoring Server, you must install and configure Message Queuing on each server running Monitoring Server. The Monitoring Server uses Message Queuing to ensure reliability when processing audio and video quality metrics reports.
The Monitoring Server collects quality metrics at the end of each VoIP call from the participant endpoints, including Lync 2010, the A/V Conferencing server, Mediation Server, and IP phones. These quality metrics are aggregated and stored in a SQL database for alerting and reporting purposes. The data collected is used for alerting on abnormal media quality conditions and generating media quality reports.
The Data Collection Agents are installed automatically on every Front End Server and Standard Edition server. Although agents are activated automatically, no data is actually captured unless a Monitoring Server is deployed and associated with that Enterprise pool or Standard Edition server.
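As a sketch only, the Message Queuing prerequisite mentioned above can be added from PowerShell on Windows Server 2008 R2 via the ServerManager module; the feature names (MSMQ-Server, MSMQ-Directory) are an assumption on my part, so confirm them against the Lync Server deployment documentation.
    # Sketch: add Message Queuing with Directory Service Integration before installing Monitoring Server
    Import-Module ServerManager
    # Feature names assumed for Windows Server 2008 R2
    Add-WindowsFeature MSMQ-Server, MSMQ-Directory
    # Confirm the features are installed
    Get-WindowsFeature MSMQ* | Where-Object { $_.Installed }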
18. Monitoring Server Topology Multiple pools can report to a single Monitoring server
A single pool cannot report to multiple Monitoring Servers
To enable QoE data to be sent and received, you must open port 5069 on your Front End pool load balancers
The database can be co-located or separate
A single Monitoring Server will support 300,000 users. Slide Objective: Provide an example topology using Monitoring Server with a collocated database and multiple pools or a single pool
Notes:
As you will see from the figure:
Monitoring Server may support multiple pools and Mediation Servers
However, a pool or Mediation Server may only be associated with one Monitoring Server
The A/V MCUs are part of the Standard Edition servers and the consolidated Enterprise Edition Front End Servers, respectively.
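A quick way to confirm the port 5069 requirement called out above is a plain TCP connect test from PowerShell; this is only a reachability sketch, and the pool FQDN is a placeholder.
    # Sketch: test whether TCP 5069 is reachable through the Front End pool load balancer
    $poolFqdn = 'pool01.contoso.com'   # placeholder - substitute your own pool FQDN
    $port = 5069
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $client.Connect($poolFqdn, $port)
        "Port $port on $poolFqdn is reachable"
    }
    catch {
        "Port $port on $poolFqdn is NOT reachable: $($_.Exception.Message)"
    }
    finally {
        $client.Close()
    }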
19. Monitoring Server Database Capacity Planning (Monitoring Server DB). Database size is dependent on call volume and call report retention settings
Each day's call report uses approximately 16.8 KB of database storage per user
Estimate database size with this formula:
DB size = (DB growth per user per day) * (# of users) * (# of days)
Example: using the default call report retention time for 50,000 users
Estimated DB size = (16.8 KB/day) * (50,000 users) * (60 days) = 50.4 GB
* Values in this example are based on the capacity planning user model from the OCS 2007 R2 documentation. Slide Objective: Discuss how to plan for capacity
Notes:
This data is taken from R2 values and, in my opinion, is going to increase; however, PG have not released any new data at this time (last contact 14 July 2010, Keith Hanna).
This calculation is self-explanatory, but if you are running a Monitoring Server in an environment this large you should consider using 73 GB or 144 GB drives in the Monitoring Server.
Performance: for optimal performance it is recommended that you put these files on four physical disks:
System file and Message Queuing (MSMQ) file on the same physical disk
Monitoring Server database data file and CDR database data file on the same physical disk
Monitoring Server database log file
CDR database log file
For SQL database design, don't forget to implement SQL best practices (e.g. separate log files from DB files)
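The sizing formula above is easy to turn into a what-if calculation; the PowerShell sketch below simply restates it using the 16.8 KB per user per day figure and the 50,000-user, 60-day example from the slide.
    # Sketch: estimate Monitoring Server database size from the slide's formula
    # DB size = (growth per user per day) * (number of users) * (retention in days)
    $kbPerUserPerDay = 16.8     # KB, from the OCS 2007 R2 capacity planning user model
    $users = 50000
    $retentionDays = 60         # default call report retention
    $dbSizeGB = ($kbPerUserPerDay * $users * $retentionDays) / 1000000   # KB to GB (decimal), as on the slide
    "Estimated Monitoring Server DB size: {0:N1} GB" -f $dbSizeGB        # 50.4 GB for this example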
20. Monitoring Server Reports: Top-Level Report Types. Dashboard
Weekly
Monthly
Reporting
System Usage
Per-user Call Diagnostics
Call Reliability Diagnostics Reports
Media Quality Diagnostics Reports. Slide Objective: Discuss reporting categories
Notes:
These are the top level reporting categories and within each of these there are sub-categories as shown on the next slide.
Aside:
If you want to view reports that show summaries and trends of the media quality, you will need to deploy the Monitoring Server Report Pack.
After you have started the services for your Monitoring Server, you can deploy the Monitoring Server Report Pack to an instance of Microsoft SQL Server 2008 Reporting Services. This step is optional.
The Monitoring Server Report Pack contains a set of standard reports that are published by Microsoft SQL Server 2008 Reporting Services. The reports are made available by a Web site that is published by the Report Manager.
You obtain information about media quality by reviewing reports that are based on the data that the Monitoring Server collects and uses to calculate mean opinion scores. If you installed SQL Server 2008 with Reporting Services, you can view these reports at the Reporting Services instance that you specified during setup.
21. Monitoring Server Reports: Improvements. For ROI analysis and asset management
Usage reports
IP phone hardware and software versions and ownership
For Operations and Diagnostics
Dashboard
Call Reliability
Media Summary reports
For Helpdesk
Individual user activity reporting
Can be configured for automatic generation and email delivery. Slide Objective: Discuss reporting improvements
Notes:
Reporting can be separated out into categories to show the value add.
ROI analysis provides a detailed breakdown of the number of sessions, minutes, and messages for each of the modalities, in both summary (dashboard) and detailed (reports) form.
Operations and diagnostics are provided via the dashboards and the call reliability sections within the reports section. These can be further filtered by choices such as location, modality, etc.
From the helpdesk perspective, individual user queries can be executed showing all interactions from the specified user.
Reports can be configured for automatic generation and delivery, e.g. weekly failures.
22. Monitoring Server: Service Health Monitoring Goals. Accurate Alerts
Filter out transient conditions
Distinguish based on impact
Track current state (active or resolved)
Actionable Alerts
Cause and recommended actions
Relevant information to identify and isolate
Guidance for troubleshooting
Service monitoring
Component monitoring
Voice quality monitoring. Slide Objective: Discuss Service Health monitoring aims
Notes:
The SCOM pack aims to be a useful tool right from installation, rather than needing the level of configuration that previous versions may have needed.
Alerts are raised at different levels based on impact; a high-priority alert is not necessarily raised if there is an issue where we have resilience, i.e. an impact to the service rather than a loss of service.
Conditions that get resolved automatically have their alerts marked as resolved.
Information is included in the alerts to provide guidance for resolution.
The next few slides talk about the levels of monitoring.
23. Monitoring Server: Service Monitoring. Central discovery via CMS
End to end verification
Synthetic Transactions
PowerShell
Test-cs<command>
Run with test accounts or real users
Run periodically
Failure = high priority alert
Auto resolved if successful on next attempt
Slide Objective: Discuss Service monitoring
Notes:
Test-CsRegistration
Test-CsIM
Test-CsP2PAV
Test-CsAddressBookService
Etc.
The screenshot shows the PowerShell output from a failed test vs. a successful test.
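As a sketch, the synthetic transactions listed above can be run ad hoc from the Lync Server Management Shell; the pool FQDN, SIP addresses, and exact parameter names below are assumptions on my part, so verify them with Get-Help before relying on them. If health monitoring test accounts have been configured (New-CsHealthMonitoringConfiguration), the credential and SIP address parameters can usually be omitted.
    # Sketch: ad hoc synthetic transactions against a Front End pool (placeholder names)
    $pool = 'pool01.contoso.com'
    $cred1 = Get-Credential 'contoso\testuser1'
    $cred2 = Get-Credential 'contoso\testuser2'
    # Can the first test user register against the pool?
    Test-CsRegistration -TargetFqdn $pool -UserCredential $cred1 -UserSipAddress 'sip:testuser1@contoso.com'
    # Can the two test users exchange an instant message?
    Test-CsIM -TargetFqdn $pool -SenderCredential $cred1 -SenderSipAddress 'sip:testuser1@contoso.com' -ReceiverCredential $cred2 -ReceiverSipAddress 'sip:testuser2@contoso.com'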
24. Monitoring Server: Component Monitoring. Components on individual servers
Key Health Indicators (KHI)
Categorized as service impacting aspects
Non-Key Health Indicators (non-KHI)
Categorized as non-service impacting
SCOM KHIs are classed as medium priority
Auto-resolved if they return to health. Slide Objective: Discuss component monitoring
Notes:
Component failure is less important as there will be multiple resilient components providing a service.
For example if a conferencing MCU on a single server fails, this is a non-KHI, as there are multiple MCUs within a pool providing the same service.
25. Monitoring Server: Voice Quality Monitoring. End-user reliability and media quality experience
Call reliability: CDR database
Expected Failure
Unexpected Failure
SCOM alerting
Alerts raised for higher than expected failure rates
Each alert includes a CDR report link for troubleshooting
Media quality: QoE database
Good quality
Poor quality
SCOM alerting
Alerts raised for higher than expected failure rates
Each alert includes QoE report link for troubleshooting 25 Slide Objective: Discuss voice quality monitoring
Notes:
All call reliability data is captured within the CDR database.
Expected failures are those such as call not answered or call rejected.
Unexpected failures are calls which are dropped.
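Both call reliability and media quality alerting depend on data actually reaching the CDR and QoE databases, so it is worth confirming that collection is enabled. A minimal sketch using the standard Lync Server 2010 configuration cmdlets (global scope assumed; verify the parameter names with Get-Help in your environment):

# Check whether CDR and QoE collection are currently enabled.
Get-CsCdrConfiguration | Select-Object Identity, EnableCDR
Get-CsQoEConfiguration | Select-Object Identity, EnableQoE

# Turn them on at the global scope if they are off.
# Site-scoped settings can be created with New-CsCdrConfiguration / New-CsQoEConfiguration instead.
Set-CsCdrConfiguration -Identity global -EnableCDR $true
Set-CsQoEConfiguration -Identity global -EnableQoE $true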
26. Monitoring Server: Call Detail Record Data
Improved diagnostics for all modalities
Expected Failure vs. Unexpected Failure
Registration Diagnostics
IP phone service data
Capture data from analogue devices
26 Slide Objective: Discuss range of CDR collection locations
Notes:
Lync Server 2010 provides significantly improved CDR collections, ranging from registration data right through to analogue devices connected to the gateways.
27. Monitoring Server Reporting: Dashboard (Report Samples) 27 Slide Objective: Provide an example of the Dashboard
Notes:
Scenarios for Dashboard
Quick trend reporting over current week/month (separate report for month).
Contains usage over last 6 weeks (weekly report) or 6 months (monthly report)
Weekly/monthly refers to the diagnostics section (bottom left and right side)
Media Quality diagnostics (bottom right)
This section is dynamic and has a minimum bar before items get added: for example, a server must be reporting more than 1% poor-quality calls and must have handled a minimum number of calls (30).
Note the cell highlighting: in this case yellow, showing more than 5% poor quality. Red would be used if the figure were significantly higher (10%).
A number of these values have hyperlinks for further drill-down; each of these reports runs for the current week. Note that the week runs from Sunday to Saturday, so if you view this on a Monday it will only contain Sunday and Monday data.
For specific date ranges, you must use the reports section (more later).
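To make the highlighting rules described above concrete, here is a small illustrative sketch (not product code) that applies the thresholds from the notes: a server is only listed once it has handled at least 30 calls with more than 1% poor-quality calls, is flagged yellow above 5%, and red above 10%.

# Illustrative only: classify a server the way the dashboard notes describe.
function Get-PoorCallHighlight {
    param(
        [int]$TotalCalls,
        [int]$PoorCalls
    )

    if ($TotalCalls -lt 30) { return "Not listed (fewer than 30 calls)" }

    $poorPercent = ($PoorCalls / $TotalCalls) * 100

    if ($poorPercent -le 1)      { return "Not listed (1% or less poor quality)" }
    elseif ($poorPercent -gt 10) { return "Red (more than 10% poor quality)" }
    elseif ($poorPercent -gt 5)  { return "Yellow (more than 5% poor quality)" }
    else                         { return "Listed, no highlight" }
}

# Example: 200 calls, 13 of them poor = 6.5%, so the cell would be highlighted yellow.
Get-PoorCallHighlight -TotalCalls 200 -PoorCalls 13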
28. Monitoring Server Reporting: Dashboard, Unique logon sessions (Report Samples) 28 Slide Objective: Provide an example of a linked report from the dashboard: Unique logon sessions
Notes:
This report is directly linked from the dashboard, and in turn links to a daily breakdown per hour of the logon information.
Additional graphs are provided of the unique logon and unique active users.
Also, we can link directly back to the dashboard, or go to the reports page from here (this was a pain point in R2).
Other useful reports from the dashboard
Total sessions
Total conferences
Total PSTN conferences
And many failure reports, which we'll see next.
29. Monitoring Server Reporting: Dashboard, Expected Failures (Report Samples) 29 Slide Objective: Following the previous report, this takes us to the failure distribution for the expected failures
Notes:
This report is the same whether it's for expected or unexpected failures.
The report categorizes session distribution by:
Top Diagnostics Reasons
Top Modalities
Top Pools
Top Sources
Top Components
Top From Users
Top To Users
Top User Agents
From here, the data can be drilled down further by following the hyperlinks to investigate specific failures.
30. Monitoring Server Reporting: Reports (Report Samples) 30 Slide Objective: Provide an example of the main reports page
Notes:
The difference between the dashboard and the reports section comes down to control. The dashboard is all about recent trends and diagnostics, whereas the reports section is more about ROI-type reports: system usage, device management, and so on.
Each of these reports can be made granular, not only in terms of dates but also for related fields, e.g. specific sites, users, or devices, depending on the specific report.
The bottom report bar shows fields such as interval and pool (this is the same report as the first link we saw from the monitoring page).
The first few reports here are similar to the dashboard reports, so we'll move on to the Response Group reports.
31. Monitoring Server Reporting: Reports, Response Group Service (Report Samples) 31 Slide Objective: Provide an example of the main Response Group Service usage page
Notes:
One area of reporting that was developed based on feedback on the R2 implementation is the level of detail provided for the Response Group Service.
Here we have the weekly summary, broken down into:
Received calls
Successful calls
Offered calls
Answered calls
Percentage of abandoned calls
Avg call length
Transferred calls
32. Monitoring Server Reporting: Reports, Call Reliability (Report Samples) 32 Slide Objective: Provide an example of the call reliability report
Notes:
Similar reports exist for call reliability, peer-to-peer reliability, and conference reliability.
33. Monitoring Server Reporting: Reports, Call Detail (Report Samples) 33 Slide Objective: Provide an example of the call detail report
Notes:
(Almost all of these fields have tooltips when you hover the mouse over them.)
Specifically, draw attention to the highlighted fields: yellow for a warning and red for an issue.
Sections in this report:
Call information: who, when, which device, what hardware
Media line: network information (location, bandwidth limits in place, wired vs. wireless, IP address, speed)
Caller device and signal metrics: device used, mic/speaker info, send/receive noise levels
Callee device and signal metrics: same as above
Caller client event: this is where specific device issues raised during a call are shown. In this example, red highlights an issue with the speaker and yellow highlights an issue with the mic; the colors reflect the duration of each issue, not speaker vs. mic.
Highlights issues such as echo, low speech, device howling, clipping, glitching, and poor network
Callee client event: as above
34. Monitoring Server Reporting: Reports, Call Detail (Report Samples) 34 Slide Objective: Provide an example of the call detail report
Notes:
Continued from previous slide
Callee client event details have been left out to save space; they contain the same fields as the caller event details.
The audio stream (callee -> caller) shows the codec used, sample rate, error correction (suggesting packet loss), an estimate of bandwidth, packet utility, and a network summary (jitter, round trip, packet loss occurrences). It also shows Network MOS; the highlighted value, MOS degradation, is probably the most important, as it provides a quick, at-a-glance view of the network's impact on a call. In this case the impact is significant.
(Looking back at the network info on the previous slide, we can see the callee was on a wireless network.)
35. Troubleshooting 35 Slide Objective: Introduce troubleshooting section
Notes:
From the start of installation, the need for troubleshooting has been reduced with the introduction of Topology Builder.
Topology Builder provides sanity and consistency checks on the configuration prior to any installation of server roles.
We've already covered some of the troubleshooting improvements
Improved Diagnostics
36. Lync Server 2010 Logger 36 Slide Objective: Discuss Lync Server 2010 logger
Notes:
Lync Server 2010 Logger provides the capability to log detailed debug data to a file on each server. The specific filters available will differ depending upon the roles installed on the server.
Using Analyze Log Files starts Snooper (the Resource Kit tool), which provides a more user-friendly interface for debugging SIP log files.
In large deployments, filtering by user or FQDN can help to remove the additional noise.
37. Lync Server Management Shell (1 of 3) Error output is red and usually points to what the problem is
37 Slide Objective: Discuss Lync Server Management Shell
Notes:
The detail in the errors can sometimes point to where a problem is.
If the failure output is not terribly useful or helpful, try the same cmdlet again using the -Verbose switch. It should help you understand what specific actions a cmdlet is taking and may help pinpoint any problems. This is especially helpful for complex cmdlets that may do several different things (for example, Enable-CsTopology).
The -Verbose switch will show what data source a cmdlet is using for any actions it is taking. These data sources are usually either SQL (the example above shows the CMS back end and the xds database) or Active Directory Domain Services (AD DS), should there be a need to collect any information from there (user-specific cmdlets usually read from/write to AD DS).
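As a quick illustration of the -Verbose approach (Enable-CsTopology is the example mentioned above; the user identity and pool FQDN in the second command are placeholders):

# Re-run a failing cmdlet with -Verbose to see each step it performs
# and which data store (CMS/SQL or AD DS) it reads from or writes to.
Enable-CsTopology -Verbose

# The same common parameter works on most Lync Server cmdlets, for example:
Enable-CsUser -Identity "Contoso User" `
    -RegistrarPool "pool01.contoso.com" `
    -SipAddress "sip:user@contoso.com" -Verbose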
38. Lync Server Management Shell (2 of 3) Use the verbose switch to get additional debugging information (shown in yellow)
38 Slide Objective: Discuss Lync Server Management Shell
Notes:
The detail in the errors can sometimes point to where a problem is.
If the failure output is not terribly useful or helpful, try the same cmdlet again using the -Verbose switch. It should help you understand what specific actions a cmdlet is taking and may help pinpoint any problems. This is especially helpful for complex cmdlets that may do several different things (for example, Enable-CsTopology).
The -Verbose switch will show what data source a cmdlet is using for any actions it is taking. These data sources are usually either SQL (the example above shows the CMS back end and the xds database) or Active Directory, should there be a need to collect any information from there (user-specific cmdlets usually read from/write to AD DS).
39. Lync Server Management Shell (3 of 3) If the error output and the -Verbose switch don't lead you to the error, there is tracing output you can collect via Lync Server 2010 Logger:
PowerShell: gives a bit of additional information for the PowerShell cmdlets that are run
ADConnect: shows interactions with Active Directory, should the cmdlet need to do so 39 Slide Objective: Discuss Lync Server Management Shell
Notes:
The detail in the errors can sometimes point to where a problem is.
If the failure output is not terribly useful or helpful, try the same cmdlet again using the -Verbose switch. It should help you understand what specific actions a cmdlet is taking and may help pinpoint any problems. This is especially helpful for complex cmdlets that may do several different things (for example, Enable-CsTopology).
The -Verbose switch will show what data source a cmdlet is using for any actions it is taking. These data sources are usually either SQL or Active Directory, should there be a need to collect any information from there (user-specific cmdlets usually read from/write to AD DS).
40. Lync Server Control Panel: Control Panel startup errors should be investigated in several places:
Event Viewer (Application, Security and CS)
Internet Information Services (IIS)
Tracing using Lync Server 2010 Logger
Event Viewer: If the Control Panel fails to come up, first check the Application event log, Security event log, and Lync Server event logs for errors around the time access was attempted. This could give clues as to what the issue is. 40 Slide Objective:
Notes:
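The Event Viewer check can also be scripted. A minimal sketch using Get-WinEvent; note that the "Lync Server" log name is an assumption here, so confirm the exact name under Applications and Services Logs on your Front End server:

# Pull recent errors and warnings from the Application and Lync Server logs.
# "Lync Server" is assumed to be the log name shown in Event Viewer; adjust if yours differs.
foreach ($log in "Application", "Lync Server") {
    Get-WinEvent -LogName $log -MaxEvents 200 |
        Where-Object { $_.Level -ge 1 -and $_.Level -le 3 } |   # Critical, Error, Warning
        Select-Object TimeCreated, LogName, Id, LevelDisplayName, Message |
        Format-Table -AutoSize
}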
41. Lync Server Control Panel IIS:
IIS Logs for the site in question should be reviewed
Also ensure the site and app pool are running
41 Slide Objective:
Notes:
The sites in the left pane are running. Note the Default Web Site with the stop-sign icon, showing it's not running.
The application pool for Bigfin is LSBigfinAppPool. Note that it shows a status of Started.
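The same site and application pool checks can be made from PowerShell with the WebAdministration module that ships with IIS 7.5; the LSBigfinAppPool name comes from the slide, so verify it in IIS Manager on your own server:

# Requires elevation and the IIS PowerShell module.
Import-Module WebAdministration

# Are the web sites running? (A stopped Default Web Site is expected here.)
Get-Website | Select-Object Name, State

# Is the Control Panel application pool started?
Get-WebAppPoolState -Name "LSBigfinAppPool"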
42. Lync Server Control Panel Lync Server 2010 Logger: The next step to try is to collect relevant tracing using Lync Server 2010 Logger. The following traces could help in Lync Server Control Panel startup issues:
Bigfin
BigfinPlugin
BigfinWeb
ADConnect
PowerShell 42 Slide Objective:
Notes:
43. Lync Server Control Panel Issues that occur once you are successfully in the Lync Server Control Panel should show up similarly to this in the application:
43 Slide Objective:
Notes:
If the errors in the Lync Server Control Panel aren't descriptive enough or don't give enough information to resolve the issue, follow the other troubleshooting steps above to see what could be causing it.
44. Lync Server Control Panel Use PowerShell Lync Server 2010 Logger tracing while performing actions in Lync Server Control Panel to find out which PowerShell cmdlets it's calling
This should give enough context to assist in scripting common actions 44 Slide Objective:
Notes:
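Once the Logger's PowerShell trace has shown which cmdlets the Control Panel calls for a given task, those cmdlets can be scripted directly. A hedged sketch of a typical bulk action; the CSV path, column name, and pool FQDN are placeholders:

# Bulk-enable users for Lync from a CSV file that has a SamAccountName column.
Import-Csv "C:\Temp\NewLyncUsers.csv" | ForEach-Object {
    Enable-CsUser -Identity $_.SamAccountName `
        -RegistrarPool "pool01.contoso.com" `
        -SipAddressType EmailAddress
}

# Verify the result.
Get-CsUser | Select-Object DisplayName, SipAddress, RegistrarPool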
45. Q&A 45 Slide Objective:
Notes:
46. © 2010 Microsoft Corporation. All rights reserved.
Microsoft, Windows, and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.
The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. This document may contain information related to pre-release software, which may be substantially modified before its first commercial release. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, email address, logo, person, place or event is intended or should be inferred.