
What online work environment might best suit you?


Presentation Transcript


  1. What online work environment might best suit you?

  2. A brief summary of the project stages • Starting point • Tens, if not hundreds, of different programmes are available • A quick overview resulted in the selection of the most promising • Preliminary study • The basic features of the software were investigated • 23 different software packages were selected for this stage • The most promising of these were selected for a broader examination, based on the opinions of the evaluation group and the client • A broader study • A detailed account of the eight (8) most promising software packages • Background study, testing and a written report • Applicability to various scenarios defined, thereby providing recommendations

  3. Methodology • Study based on: • comparison of features against the evaluation criteria • observations of the evaluation group • collaboration between the partner network and suppliers of online meeting and conference software • discussions with colleagues using various online meeting and conference programmes • internet sources and discussions between the client and the evaluation group • The most important method was, however, practical testing, individually and collectively, and recording the observations • Simulation of all situations was not completely successful (e.g. a webinar with 100+ participants)

  4. Evaluation criteria • Comparison of technical features formed the starting point for the evaluation • Main categories of evaluation criteria • Technical evaluation • Administrative evaluation • Interactivity evaluation • Functionality evaluation • Cost evaluation • The programmes were ranked under each feature • Summary table • Lists all important features generally • Scenario tables • Specific features emphasised in specific scenarios • Features nonessential to a scenario were not evaluated • Certain features were emphasised more than others using a weighting coefficient (see the sketch below)
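To make the weighted comparison concrete, here is a minimal sketch of how per-feature scores and weighting coefficients could produce a scenario ranking. All feature names, scores and coefficients below are hypothetical illustrations, not values from the study's evaluation table.

```python
# Minimal sketch of a weighted feature comparison (hypothetical values,
# not the study's actual scores or coefficients).

# Per-feature scores for each programme, e.g. on a 1-5 scale.
scores = {
    "Programme A": {"audio": 4, "screen_sharing": 5, "shared_notes": 3},
    "Programme B": {"audio": 5, "screen_sharing": 3, "shared_notes": 0},
}

# Weighting coefficients: features emphasised in a given scenario get a
# larger coefficient; features nonessential to the scenario get weight 0.
weights = {"audio": 2.0, "screen_sharing": 1.0, "shared_notes": 1.5}

def weighted_total(features: dict[str, float]) -> float:
    """Sum of score x weight over all evaluated features."""
    return sum(weights[name] * score for name, score in features.items())

# Rank programmes from best to worst by weighted total.
ranking = sorted(scores, key=lambda p: weighted_total(scores[p]), reverse=True)
for place, programme in enumerate(ranking, start=1):
    print(place, programme, weighted_total(scores[programme]))
```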

  5. Evaluation scenarios • Pair work/small groups (2–3 individuals) • Two or three people, usually in physically different locations, engage in dialogue on, for example, issues relating to work or study • Meeting/negotiation (4–10 individuals) • There are usually several participants and various documents to deal with • The recording of, e.g., messages and observations written in the space is emphasised • Ease of pacing and allocating participants’ turns to speak is also emphasised • Teaching use, lecture, training (11–20 individuals) • Interaction and a clear user interface facilitate comprehension in teaching uses • Diverse tools enhance the learning process • Flexible and quick management of user rights is also of vital importance • Meeting/conference (100+ individuals) • In large meetings the programme’s role diminishes as various technical requirements are emphasised. However, it must be possible to follow the conference, and elements of interaction are highlighted.

  6. Comparison results • A 44-page report on the results of the comparison was compiled; a detailed evaluation table is attached to the report • The results of the general evaluation and the scenario-specific evaluations are summarised here • Placings are based solely on the comparison of features and their ranking

  7. Average of overall ranking (general evaluation + scenarios)
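The chart for this slide is not reproduced in the transcript; arithmetically, the overall figure is simply the mean of a programme's placing in the general evaluation and in the four scenarios. A minimal sketch with hypothetical placings:

```python
# Hypothetical placings (1 = best): general evaluation followed by
# scenarios 1-4. The real figures are in the study's 44-page report.
placings = {
    "Programme A": [1, 2, 1, 3, 2],
    "Programme B": [3, 1, 2, 2, 4],
}
for programme, ranks in placings.items():
    print(programme, sum(ranks) / len(ranks))  # average overall ranking
```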

  8. General evaluation

  9. Scenario-specific evaluation • Scenario 1 (small group/pair work, 2–3 individuals)

  10. Scenario-specific evaluation • Scenario 2 (meetings, 4–10 individuals)

  11. Scenario-specific evaluation • Scenario 3 (teaching situations, 11–20 individuals)

  12. Scenario-specific evaluation • Scenario 4 (seminars, 100+ individuals)

  13. Programme rankings

  14. Testing • In testing situations the evaluation group primarily made observations at a general level of the programmes for each scenario • There were five testers altogether, working on seven different machines • Tests were conducted using the following operating system and browser groupings: • Windows XP Pro, Firefox • Windows XP Pro, Internet Explorer 6 • Windows Vista Business, Firefox • Mac OSX, Safari • Ubuntu Linux, Firefox • The aim was to test all essential functions and how well they work in the different groupings (see the sketch below)
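As a sketch of the testing setup, the operating system and browser groupings from this slide can be laid out as a simple test matrix. The list of essential functions here is a hypothetical placeholder, since the slide does not enumerate them.

```python
# OS/browser groupings used in testing (from slide 14), crossed with a
# hypothetical list of essential functions to verify in each grouping.
groupings = [
    ("Windows XP Pro", "Firefox"),
    ("Windows XP Pro", "Internet Explorer 6"),
    ("Windows Vista Business", "Firefox"),
    ("Mac OSX", "Safari"),
    ("Ubuntu Linux", "Firefox"),
]
functions = ["join a session", "audio", "screen sharing", "chat"]  # hypothetical

# Enumerate every (grouping, function) pair a tester would verify.
for os_name, browser in groupings:
    for function in functions:
        print(f"{os_name} / {browser}: test '{function}'")
```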

  15. Collection of data and experience • The evaluation group’s previous experiences of the various programmes formed, to a large extent, the basis of the comparison • We collected comments from as broad a group as possible through partner networks • We familiarised ourselves with existing studies, test reports and material from suppliers • In the case of some programmes we were in contact with programme suppliers (ACP/Humac, Elluminate) or active users (iLinc/Kemi-Tornion amk, WebEx/TKK, WebEx/Blackboard) in order to participate in guided presentations or authentic situations and to accrue experience • In addition to the evaluation group’s views, the scenario-specific weightings also incorporated the emphases of nine other individuals

  16. Result reliability • For the sake of comparison and to ensure the reliability of the results, the evaluation group also estimated the rankings themselves • A comparison of the general and scenario-specific evaluations with the evaluation group’s general estimations shows some discrepancies • In the small group scenario: Dimdim and OpenMeetings • In meeting situations: the above, plus GoToMeeting and MS Office Live • In teaching situations: Elluminate • Perhaps the greatest divergence in the evaluation group’s placings was in the general evaluation. Everyone seemed to have their own favourite, and the discrepancies with the feature comparison table were the greatest (see the sketch below)
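One way to make such discrepancies visible (a sketch, not necessarily the method the study used) is to compare, programme by programme, the feature-table placing with the group's own estimate. The placings below are hypothetical.

```python
# Hypothetical placings (1 = best): feature-comparison ranking vs. the
# evaluation group's own estimate. Not the study's actual figures.
feature_rank = {"Dimdim": 2, "OpenMeetings": 3, "GoToMeeting": 5}
group_rank = {"Dimdim": 4, "OpenMeetings": 1, "GoToMeeting": 5}

# Absolute rank difference per programme; larger = bigger discrepancy.
for programme in feature_rank:
    diff = abs(feature_rank[programme] - group_rank[programme])
    print(f"{programme}: feature {feature_rank[programme]}, "
          f"group estimate {group_rank[programme]}, discrepancy {diff}")
```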

  17. Conclusions • Generally, the basic features of all the programmes are similar, and they are applicable to all user situations. Ranking the programmes was not easy. • Adobe Connect Pro was the clear winner, with iLinc in second place. • There were potential challengers in the group, especially in certain scenarios. • It was interesting to note the open source OpenMeetings’ high ranking in the comparison, despite its development having only just begun. • Operating system dependency was a general problem in the majority of programmes. • An interesting deficiency seemed to be evident in all apart from ACP and Office Live: the evaluation group considered shared notes one of the most important features, and the other programmes lacked this.

  18. Further information: • This study is the Summer University of Häme’s subproject ”Virtual meeting environments and Online ITK” within the Open Networks for Learning project. • The study was sub-contracted to Mediamaisteri Group Oyj. • Summer University of Häme • Hanne Murto • Hanne.murto@hame.fi • The Association of Finnish eLearning Centre • info@eoppimiskeskus.fi • Mediamaisteri Group Oyj • marko.makila@mediamaisteri.com
