Developing an Evaluation Instrument for e-Commerce Web Sites from the First-Time Buyer’s Viewpoint


Wei-Hsi Hung and Robert J McQueen
Dept. of Management Systems, The University of Waikato, Hamilton, New Zealand

Abstract: This paper presents the process of developing an evaluation instrument specifically for the evaluation of e-Commerce Web sites from the first-time buyer’s viewpoint. The development process is based on theoretical discussions of the Web evaluation and Web user satisfaction literature. A draft evaluation instrument was developed and, to enhance its reliability and validity, refined through several iterative trials on e-Commerce Web sites. The final version is capable of evaluating e-Commerce Web sites effectively, and the instrument has implications for both Web evaluation practitioners and academics.

Keywords: e-Commerce, Web evaluation, user satisfaction, transaction activity, instrument

1. Introduction

Web-based e-Commerce gives companies global reach and is far less expensive than alternatives such as electronic data interchange (Patel et al., 1998). It has become an extremely important avenue for firms in many industries to interact with their stakeholders and customers (Merwe and Bekker, 2003). As the number of transactions conducted through e-Commerce increases, the design of Web sites becomes a critical success factor (Kim, Shaw and Schneider, 2003; Wan and Chung, 1998). Forrester Research (cited in Cunliffe, 2000) estimates that poor Web design will result in the loss of 50 percent of potential repeat visits, due to an initial negative experience. Rettig and LaGuardia (1999) suggested that an effective evaluation could lead to better design of electronic systems to meet users’ needs. Thus, an evaluation instrument is necessary. A number of attempts at evaluating consumer-oriented Web sites have been developed and published in the last few years.
Some took the purely subjective form of the individual preferences of the assessor, and some took the objective form of statistical measurement, such as monitoring the site’s download time and traffic. However, because Web sites have become more complicated and the number of Web pages has increased, these forms are no longer able to evaluate Web sites effectively. In addition, these evaluation criteria or individual preferences may not apply to e-Commerce Web sites, because this type of Web site requires the addition of business-related evaluation criteria (Kim et al., 2003).

This paper presents the process of developing an evaluation instrument specifically for e-Commerce Web sites from a first-time buyer’s viewpoint. For the purpose of this paper, we define Web site evaluation as the assessment and measurement of Web sites. Instead of compiling a list of detailed evaluation criteria, this paper chooses crucial criteria based on a discussion of several theoretical models from the business transaction, Web user satisfaction, and Web evaluation literature. The instrument can therefore perform its evaluation tasks effectively. This paper starts by reviewing the literature on Web site evaluation and Web user satisfaction. A Web satisfaction model is then suggested. The paper then discusses how this model can be applied to measure first-time buyers’ satisfaction, and how the evaluation criteria and rating systems were chosen. The methodology used to test the reliability and validity of the model is explained. Finally, an evaluation instrument is presented for the evaluation of e-Commerce Web sites, and its strengths and limitations are outlined.

2. Overview of web site evaluation

The common issues found in the literature relating to Web site evaluation are quality (e.g. Day, 1997; Loiacono, 1999; Olsina et al., 1999; Rettig and LaGuardia, 1999; Dran et al., 1999; Cox and Dale, 2002; Mich et al., 2003); Web design (e.g. Shneiderman, 1997; Wan and Chung, 1998; Gehrke and Turban, 1999; Thelwall, 2003); and usability (e.g. Nielsen, 1995; Palmer, 2002; Agarwal and Venkatesh, 2002; Konradt et al., 2003). Researchers have adapted the concept of Web quality from the quality of products and services (e.g. Loiacono, 1999; Cox and Dale, 2002; Day, 1997). For example,

ISSN: 1566-6379 31 ©Academic Conferences Limited

Electronic Journal of Information Systems Evaluation Volume 7 Issue 1 (2004) 31-42

Dran, Zhang, and Small (1999) adopted Kano’s Model of Quality as a theoretical framework to evaluate the quality of Web sites. This model separates product and service quality into three levels according to customer expectations: expected, normal, and exciting. These researchers believe that quality in a product or service is not what the provider or seller puts into it, but what the client or customer receives from it. Thus, a Web site should try to satisfy its customers’ needs in order to ensure repeat visits and gain their loyalty.

In regard to Web design, Shneiderman (1997) provided an Objects/Actions Interface (OAI) model for Web-site design, which encourages designers to focus on analyzing the relationship between tasks and the Web interface. Wan and Chung (1998) looked at problems in Web design from the perspective of network analysis. They suggested that care must be taken when designing the homepage, which is the entrance to the Web site; a homepage should occupy the center or median position in a Web site’s link structure. Gehrke and Turban (1999) suggested five major categories that should be considered when designing a Web site for a business: page loading, business content, navigation efficiency, security, and marketing/consumer focus. They argued that page loading is the most important factor in Web-site design. Thelwall (2003) suggested shifting the focus of evaluating Web design from individual pages to aggregated collections based upon Web directories, domains, and entire sites.

Undertaking a usability study usually needs high consumer or user involvement, and sometimes the study needs to be conducted in an experimental environment. Nielsen (1993, 1995) provided guidelines and criteria to evaluate the usability of Web site designs and suggested that every design project, including Web site development, should be subjected to usability testing and other validation methods.
Toh and Pendse (1997) also suggested that Web pages should be designed for usability and understanding. However, Web sites with good usability cannot guarantee users’ preference (Tullis, 1998). Although some researchers have tried to provide ways of evaluating e-Commerce Web sites specifically (e.g. Boyd, 2002; Merwe and Bekker, 2003), the selection of evaluation criteria still requires more theoretical justification. Overall, frameworks and criteria have been proposed to evaluate e-Commerce Web sites. However, few have evaluated the

Web site from a first-time buyer’s viewpoint. There is a need to provide theoretical justifications when selecting adequate evaluation criteria for e-Commerce Web sites.

3. Web user satisfaction

For any business, the key to success is repeat business from the same customers (Barnes, 1999). The same holds in the Web environment: a Web site can be considered successful if users are satisfied and revisit it. Satisfied users may spend longer at a Web site, may revisit it later, and may recommend it to others (Zhang et al., 1999). It is crucial to determine what makes a user satisfied with a Web site, as well as what the potential causes of dissatisfaction are. To this end, Web evaluators must first know who the users are and what their key goals are, and then what steps those users will take to use the site (Bacheldor, 2000).

3.1 Who are the users?

The users of a Web site fall into various groups, such as suppliers, buyers, shareholders, or other stakeholders. It is also very important to distinguish between first-time, intermittent, and frequent buyers on a Web site (Shneiderman, 1997). For example, first-time buyers usually need an overview to understand the range of services, to know what is not available, and buttons to select actions. In contrast, frequent buyers demand shortcuts or macros to speed up repeated tasks, compact in-depth information, and extensive services to satisfy their varied needs. The user group focused on in this paper is first-time buyers.

3.2 What is the first-time buyer’s goal?

The goal of first-time buyers is to conduct e-Commerce transaction activities. Several models and frameworks have been proposed to categorize the activities conducted by buyers in the e-Commerce transaction process (Gebauer and Scharl, 1999; Schubert and Selz, 1997, 1999; Lincke, 1998; Liu et al., 1997). For example, Gebauer and Scharl (1999) described the e-Commerce transaction process as including information, negotiation, settlement, and after-sales phases. Schubert and Selz (1997, 1999) and Schubert and Dettling (2002) divided the online transaction process into information, agreement, settlement, and community phases. Merwe and Bekker (2003) regard the transaction process as consisting of need recognition, gathering information, evaluating

Wei-Hsi Hung and Robert J McQueen

information, and making the purchase. Overall, these models share a certain degree of similarity. This paper has adopted the four-phase model suggested by Gebauer and Scharl (1999); Schubert and Dettling (2002) also adopted this model as a basis to extend their original model. Details of each phase are described as follows.

The information phase comprises both searching for a particular electronic catalog or piece of information, and locating required information and commodities within the Web site. Buyers seek and collect information on potential products or services in this phase. The Web functions supporting the activities in this phase are, for example, the company overview, product catalogs, news releases, and financial statements.

The negotiation phase serves to establish a contract, fixing details such as product specifications and payment. Buyers seek transaction information and decision support by assessing the value of special offerings, by identifying new bargaining options, and by engaging in negotiations. The Web functions supporting these activities are, for example, email addresses, phone numbers, fax numbers, and online communication applications that enable the buyer to deal with suppliers online.

In the settlement phase, transaction activities and procedures, which are part of the contract, are comparatively well defined. Web site support for transaction settlement includes extranet systems and various tools to process orders internally and between transaction partners, facilitate order tracking, and support payment processes. The Web functions are, for example, the payment function, document exchange, and the order status.

In the after-sale phase, proper access to the transaction file is crucial; without this, communication problems and delays can occur. The electronic support of after-sale activities is diverse.
It ranges from simple electronic mail services to automated helpdesks and sophisticated electronic maintenance manuals. The Web functions supporting the activities in this phase include, for example, the email service, electronic maintenance manuals, FAQs, and training programs. Based on a hierarchical decomposition (Shneiderman, 1997) of the user’s activities in these phases, nineteen activities are specified

for completing transactions on e-Commerce Web sites (see Table 1).

Table 1: Activities in each transaction phase

Information: 1. find news; 2. find information for specific subjects; 3. find a new product’s information; 4. find a new product’s price; 5. find a known product’s information; 6. find a known product’s price; 7. overview company; 8. check financial status.

Negotiation: 1. negotiate contract; 2. negotiate price; 3. negotiate volume; 4. negotiate delivery date.

Settlement: 1. conduct payment; 2. monitor the goods or services; 3. financial documentation.

After-Sale: 1. find maintenance information; 2. ask questions; 3. expression; 4. request training program.
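The nineteen activities of Table 1 can be held in a simple data structure when building an evaluation checklist from the instrument. The sketch below is illustrative only; the dictionary name and the exact activity strings are assumptions paraphrased from Table 1, not part of the published instrument.

```python
# Hypothetical encoding of Table 1: the nineteen transaction activities,
# grouped by the four phases of Gebauer and Scharl (1999).
TRANSACTION_ACTIVITIES = {
    "Information": [
        "find news",
        "find information for specific subjects",
        "find a new product's information",
        "find a new product's price",
        "find a known product's information",
        "find a known product's price",
        "overview company",
        "check financial status",
    ],
    "Negotiation": [
        "negotiate contract",
        "negotiate price",
        "negotiate volume",
        "negotiate delivery date",
    ],
    "Settlement": [
        "conduct payment",
        "monitor the goods or services",
        "financial documentation",
    ],
    "After-Sale": [
        "find maintenance information",
        "ask questions",
        "expression",
        "request training program",
    ],
}

# The four phase lists together cover the nineteen activities named in the text.
assert sum(len(v) for v in TRANSACTION_ACTIVITIES.values()) == 19
```

A checklist generator or scoring sheet can then iterate over the phases in order, which keeps the evaluation aligned with the four-phase transaction model.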

3.3 Why are they satisfied?

Kim et al. (2003) suggested that the factors affecting user satisfaction on the Web are attractiveness and informativeness. Attractiveness is defined as the quality of the physical settings of the Web site that attracts customers and/or generates involvement (Kim et al., 2003). It depends on three criteria: customization, interactivity, and vividness. Informativeness is defined as the logical settings of the Web site, which provide visitors with useful and understandable information (Kim et al., 2003). It comprises three evaluation criteria: understandability, reliability, and relevance.

Zhang et al. (1999) adapted Herzberg’s Two-Factor Theory to explain the difference between satisfaction and dissatisfaction. Job dissatisfaction occurs when a group of “hygiene” factors are absent (Zhang et al., 1999). Hygiene factors describe extrinsic factors that impact on employees’ relationship to the context or environment in which they do their jobs. These hygiene factors remove job dissatisfaction; however, they do not cause people to become highly satisfied and motivated in their work. In contrast, job satisfaction is determined by a group of intrinsic factors named “motivators” (Zhang et al., 1999). Motivators describe employees’ relation to what they are doing. One example used by Zhang et al. (1999) is that fast loading time will not result in user dissatisfaction, but may not be enough to guarantee user satisfaction.


In addition, Zhang et al. (1999) identified three components contributing to Web user satisfaction or dissatisfaction with a Web interface: information seeking strategy, user characteristics, and Web environment. The strategy or approach a person uses to seek information may be analytic (planned, goal-driven, deterministic, and formal) or browsing (opportunistic, data-driven, heuristic, informal, and continuous) (Zhang et al., 1999). The Web interface that supports these two strategies differs: the analytic searching strategy depends heavily on the functionality of search engine algorithms, while the browsing strategy requires a Web user interface that supports “easy and flexible control, high-quality display, and rapid response time”. The factors of user characteristics and Web environment can be considered either hygiene or motivating factors, depending on individual differences (Zhang et al., 1999).

4. Development of evaluation instrument

This paper proposes a model to demonstrate how an e-Commerce Web site can satisfy its buyers (see Figure 1).

[Figure 1 depicts a cycle of three steps: finding the function on the Web site, conducting transaction activities, and producing customer satisfaction.]
Figure 1: The proposed satisfaction model

This model shows that business buyers will: firstly, find the function or information they want; secondly, use the Web function or information to conduct transaction activities; thirdly, feel satisfied; and finally, find another function or further information. The cycle continues until buyers finish all their business activities. Although this model shows how an e-Commerce Web site satisfies its buyers, it does not show how buyers’ satisfaction is measured. This paper suggests a two-step process for measuring satisfaction. Firstly, three failure points are identified in the proposed satisfaction model in order to measure whether the Web site can satisfy

buyers to complete a transaction activity. Secondly, several evaluation criteria are presented according to the three failure points. They provide more detailed measurements of the degree of satisfaction that buyers perceive when they reach each failure point. The following sections discuss each step of this two-step process in more detail.

4.1 Identify three failure points

Three failure points are identified in the proposed satisfaction model; they are numbered 1, 2, and 3 in Figure 2.

[Figure 2 annotates the satisfaction cycle of Figure 1 with failure points 1, 2, and 3.]

Figure 2: The three failure points in the proposed satisfaction model

Failure point 1 occurs when buyers cannot access the Web site, or cannot find the function or information they want. Failure point 2 occurs when the function does not work, buyers do not know how to use the function, or the information is useless. Failure point 3 occurs when buyers do not feel satisfied, although they may not be dissatisfied.

All three failure points can be applied to first-time buyers. Because these users are new to the Web site, failure point 1 applies to them when they are using the Web site to perform transaction activities. It does not apply to frequent buyers (see the discussion in Section 3.1), who have conducted transaction activities before and know where to find the functions.

These failure points measure different degrees of satisfaction. According to Herzberg’s motivation-hygiene theory (cited in Zhang et al., 1999), “not dissatisfied” does not equal “satisfied”, and “not satisfied” is not the same as “dissatisfied”. In other words, there is a region between satisfied and dissatisfied (see Figure 3).


[Figure 3 shows a satisfaction continuum divided into Sections I, II, and III, with failure points 1 and 2 delimiting one boundary and failure point 3 the other.]

Figure 3: The three failure points and customer satisfaction

Failure points 1 and 2 can be used to measure whether the buyer is dissatisfied with the site; failure point 3 is used to measure whether the buyer is satisfied with it. Section I represents those buyers who are dissatisfied with the Web function because they cannot complete the required activity by using it. Section II covers those buyers who can conduct their activities, or whose needs the Web function fulfills, but who do not want to use other functions. Section III represents those buyers who are satisfied by the function; they will try to find other functions to complete the rest of their business activities.
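The failure-point logic described above can be sketched as a small decision function. This is an illustrative reading of the model, not part of the published instrument; the function names and the boolean inputs are assumptions introduced here for clarity.

```python
def failure_point(found, works_and_usable, satisfied):
    """Return which failure point (1-3) a buyer hits for one transaction
    activity, or None if the buyer passes all three.

    found            -- the site is accessible and the function/information
                        can be located (passes failure point 1)
    works_and_usable -- the function works, the buyer knows how to use it,
                        and the information is not useless (passes point 2)
    satisfied        -- the buyer feels satisfied, not merely 'not
                        dissatisfied' (passes point 3)
    """
    if not found:
        return 1   # failure point 1: cannot access or locate the function
    if not works_and_usable:
        return 2   # failure point 2: broken, unusable, or useless
    if not satisfied:
        return 3   # failure point 3: not dissatisfied, but not satisfied
    return None    # satisfied; the buyer moves on to the next activity


def section(fp):
    """Map a failure-point outcome to the satisfaction sections of Figure 3."""
    if fp in (1, 2):
        return "I"    # dissatisfied
    if fp == 3:
        return "II"   # neither dissatisfied nor satisfied
    return "III"      # satisfied; will seek further functions
```

For example, a buyer who locates a working payment function but remains unimpressed by it would hit failure point 3 and fall into Section II.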

4.2 Choice of evaluation criteria

Although many evaluation criteria are proposed in the literature, the focus here is on choosing those criteria which determine whether the first-time buyer can pass through the three failure points in Figure 2 and become satisfied with the Web site. Four suitable criteria have been chosen: ease-of-identification, ease-of-use, usefulness, and interactivity. Ease-of-identification is used to measure whether the buyer has passed through failure point 1; ease-of-use, failure point 2; and usefulness and interactivity, failure point 3. Table 2 shows the ability of each criterion to measure the three failure points.

Table 2: The ability of each criterion to measure the three failure points

                  Ease-of-identification   Ease-of-use   Usefulness   Interactivity
Failure point 1   Yes                      No            No           No
Failure point 2   No                       Yes           No           No
Failure point 3   Yes                      Yes           Yes          Yes

Ease-of-identification has two meanings in this paper: connectivity, and ability of identification. Connectivity is whether the Web site can be accessed reliably and its pages load quickly. Ability of identification measures how easy it is to identify a function on a Web page.

Ease-of-use refers to how easy it is to use the function to achieve the buyer’s goal. One of the best ways to illustrate this is to compare it with moving around a physical store. For example, buyers are able to get to the checkout counter immediately when they have finished shopping. Similarly, on a Web site, buyers should always be able to get back to the home page from wherever they are, and should get help quickly when they have questions.

Usefulness refers to whether a Web application would be helpful to buyers in accomplishing their intended purposes (Lu and Yeung, 1998). Relevant questions are, for example: does it have the functionality which meets buyers’ needs? Do the Web pages provide sufficient information about the products and services being promoted, such as size, color, materials, and quality?

Interactivity is concerned with how the Web site interacts with buyers. Three levels of interactivity are identified in this paper: static, dynamic, and interactive content. Static content, like printed words in a magazine, is a one-way relationship with the buyer (Rachman and Buchanan, 1999). It includes service and company information. Static content is made only by the Web site provider, and provides static information which fulfills buyers’ needs. Up-to-date information belongs to this category, for example new product advertisements and recent news; even though these types of information change dynamically, they are still a one-way presentation. Static content has the lowest interactivity. Dynamic content is a two-way presentation with buyers. It provides information that instructs or interacts with buyers, for example customized information and requirements, communication, and transaction functions. Some interactive functions, such as searchable databases, e-mails, and booking services, are categorized as dynamic content in this paper, because they are a two-way presentation. Interactive content is two-way communication between buyers and Web providers in a real-time situation. It concerns getting the right information to the right person, in the right format, at the right time, and requires sophisticated Web technology. An example is a chat room, which provides


communication between the Web provider and its buyers. Functions belonging to this category have the highest level of interactivity.

4.3 Choice of scoring systems

According to the previous discussion of satisfaction and dissatisfaction (see Figure 3), the effectiveness of three of the four criteria (ease-of-identification, ease-of-use, and usefulness) is based on the degree of satisfaction the buyer perceives. To measure a degree, it is more appropriate to use a multiple-scale scoring system than a two-scale one (e.g. Yes or No). Modified five-point Likert scales have therefore been chosen for this purpose.

Previous work applied original five-point Likert scales (1 to 5) to evaluate the effectiveness of different criteria in the Web environment, such as user satisfaction (e.g. Sing, 2004) and ease-of-use (e.g. Misic and Johnson, 1999; Dai and Grundy, 2003; Lii, Lim, and Tseng, 2004). Instead of using the original scoring system (1 to 5), this paper has adapted a system ranging from 0 to 10. The purpose is to create a larger variance between the results obtained from the Web site evaluation, making it easier for the evaluator to monitor the gap between superior and poorly designed Web functions and Web sites. Together with the criterion interactivity, which is a multiple-choice measure, the scores of each scale for each criterion are shown in Table 3.

Table 3: Scoring systems for the four criteria

Ease-of-identification: very easy, right away (10.0); easy (7.5); normal (5.0); very difficult (2.5); cannot find (0)

Ease-of-use: very easy, no help needed (10.0); easy, no help needed (7.5); normal, can use but need help (5.0); very difficult, need much help (2.5); do not know how to use or does not work (0)

Usefulness of information: very useful (10.0); useful (7.5); normal (5.0); not useful (2.5); no usefulness or cannot find (0)

Interactivity: interactive content (10.0); dynamic content (2.0); static content (2.0)

Based on the scoring systems in Table 3 and the nineteen transaction activities in Table 1, a draft evaluation instrument was developed. It comprises a series of questions asking how effective the buyer perceived the evaluation criteria to be after conducting the nineteen transaction activities. Several iterative field tests were conducted to enhance the instrument’s reliability and validity, involving a group of management, computer science, and education students in New Zealand. To enhance reliability, they were asked to evaluate the same e-Commerce Web site within twenty minutes using the draft instrument. If there was a significant difference among their results, modifications were made to the instrument; they were then asked to evaluate another Web site, and the process was repeated until the difference in results was not significant. To enhance validity, one management student was asked to use the instrument to evaluate forty e-Commerce Web sites and differentiate them. In the end, many suggestions were received and contributed to modifying the instrument. Some of the nineteen transaction activities identified previously were combined, leaving fourteen transaction activities, and three instructions were added to the instrument. The final version of the evaluation instrument is shown in Appendix 1.
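An evaluator’s responses under the Table 3 scales can be turned into per-activity and overall site scores. The sketch below assumes a simple mean over criteria and over activities; the paper does not prescribe an aggregation rule, so the averaging, the dictionary keys, and the function names are assumptions introduced for illustration.

```python
# Table 3 response scales, keyed by criterion and then by response label.
SCALES = {
    "ease-of-identification": {
        "very easy, right away": 10.0, "easy": 7.5, "normal": 5.0,
        "very difficult": 2.5, "cannot find": 0.0,
    },
    "ease-of-use": {
        "very easy, no help needed": 10.0, "easy, no help needed": 7.5,
        "normal, can use but need help": 5.0,
        "very difficult, need much help": 2.5,
        "do not know how to use or does not work": 0.0,
    },
    "usefulness": {
        "very useful": 10.0, "useful": 7.5, "normal": 5.0,
        "not useful": 2.5, "no usefulness or cannot find": 0.0,
    },
    "interactivity": {
        "interactive content": 10.0, "dynamic content": 2.0,
        "static content": 2.0,
    },
}


def activity_score(ratings):
    """Mean score for one transaction activity.

    ratings -- dict mapping a criterion name to the evaluator's response
    label. Averaging over criteria is an assumption of this sketch.
    """
    return sum(SCALES[c][label] for c, label in ratings.items()) / len(ratings)


def site_score(all_ratings):
    """Mean of the per-activity scores across a whole Web site."""
    scores = [activity_score(r) for r in all_ratings]
    return sum(scores) / len(scores)
```

For example, an activity rated “easy, no help needed” for ease-of-use and “useful” for usefulness would score 7.5; the 0-to-10 range then lets the evaluator spread results more widely than the original 1-to-5 Likert scale.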

5. Conclusion

This paper has focused on developing an evaluation instrument for e-Commerce Web sites from a first-time buyer’s viewpoint, and has proposed a useful evaluation instrument. As the importance of e-Commerce increases, the instrument will be especially important for those businesses currently embracing e-Commerce to evaluate their Web sites. Not only can it differentiate the ability of a site to support first-time buyers in conducting transaction activities, but it also measures how well each Web function supports those activities.

The instrument has several strengths. Firstly, it can evaluate different types of e-Commerce Web sites. Guideline-based models are generally grounded on practical experience; these guidelines usually assess “good” or “bad” Web resources, particularly in usability tests, and their limitation comes from the difficulty of applying them to various kinds of sites. Compared to this kind of model, the proposed instrument is capable of assessing miscellaneous sites. Secondly, the evaluation instrument does not need access to specific company information (such as the company’s marketing strategy) to select evaluation criteria. As Bauer and Scharl (2000) have noted, designing evaluation criteria


usually requires access to company information, which frequently is not available. The evaluation instrument overcomes this difficulty. Thirdly, it is easy to use. Evaluators usually need specific background in the terms used by evaluation frameworks; this instrument, however, has been through several iterative tests, and evaluators can use it easily by following the steps and descriptions within it, without knowing specific terms. Fourthly, the evaluation time is less when using the instrument to assess sites in comparison with other evaluation models (e.g. Merwe and Bekker, 2003). Finally, it is a cheap evaluation instrument in comparison with some evaluation software or services.

However, the proposed instrument has some limitations. Firstly, it assumes that evaluators search for information using a browsing strategy, not an analytical strategy. Buyers with a browsing strategy undertake an information seeking approach that depends heavily on the information environment and the buyer’s recognition of relevant information. They do not depend on the functions of search engines, unlike the analytical strategy, which depends on careful planning, recall of query terms, iterative query reformulation, and an examination of the results (Zhang et al., 1999). Thus, future research should focus on developing another instrument based on an analytical strategy. Secondly, some Web functions may not be accessible because they are password protected or require conducting an actual transaction with the company, for example online ordering, chatting with a seller, or the payment function. Thus, their usefulness and ease-of-use cannot be evaluated fully. Even though the proposed evaluation instrument provides specific criteria to measure them, the full range of measurement is not possible until they are accessed. Finally, platforms for e-Commerce are still evolving.
Dominant players, such as Cisco, Dell, IBM, and Ariba, are continually developing newer generations of platforms. The fourteen Web functions and transaction activities chosen in the evaluation form might therefore need to be extended in the future; more effective Web functions may have to be added and scored when one uses the form to measure e-Commerce Web sites. In conclusion, the evaluation instrument is capable of evaluating e-Commerce Web sites. It is based on a theoretical discussion, and can assist an evaluator to gain an overview of a site easily. This instrument can also be applied to

evaluate sites from diverse industries. It can be employed more often to evaluate e-Commerce sites in the future.

References

Agarwal, R. and Venkatesh, V. (2002) “Assessing a firm’s Web presence: A heuristic evaluation procedure for the measurement of usability” Information Systems Research, Vol 13 No 2 pp 168-186.

Bacheldor, B. (2000) “Web-site design: Simplicity pays for business-to-business sites” Informationweek, News Release 14 February 2000, Available from esig2.htm [Accessed 28 August 2000].

Barnes, H. (1999) “Getting past the hype: Internet opportunities for b-to-b markets” Marketing News, Vol 33 No 3 pp 11-12.

Bauer, C. and Scharl, A. (2000) “Quantitative evaluation of Web site content and structure” Internet Research: Electronic Networking Applications and Policy, Vol 10 No 1 pp 31-43.

Boyd, A. (2002) “The goals, questions, indicators, measures (GQIM) approach to the measurement of customer satisfaction with e-commerce Web sites” Aslib Proceedings, Vol 54 No 3 pp 177-187.

Cox, J. and Dale, B. G. (2002) “Key quality factors in Web site design and use: An examination” International Journal of Quality & Reliability Management, Vol 19 No 7 pp 862-888.

Cunliffe, D. (2000) “Developing usable Web sites: A review and model” Internet Research: Electronic Networking Applications and Policy, Vol 10 No 4 pp 295-307.

Dai, X. and Grundy, J. (2003) “Customer perceptions of a thin-client micro-payment system: Issues and experiences” Journal of End User Computing, Vol 15 No 4 pp 62-77.

Day, A. (1997) “A model for monitoring Web site effectiveness” Internet Research: Electronic Networking Applications and Policy, Vol 7 No 2 pp 1-9.

Dran, G. M., Zhang, P. and Small, R. (1999) “Quality Web sites: an application of the Kano model to Web-site design” Paper presented at the Fifth Americas Conference on Information Systems, Milwaukee USA.

Gebauer, J. and Scharl, A. (1999) “Between flexibility and automation: An evaluation of Web technology from a business processes perspective” Journal of Computer-Mediated Communication, Vol 5 No 2, Available from: http://www. /issue2/gebauer.html [Accessed 7 April 2000].

Gehrke, D. and Turban, E. (1999) “Determinants of successful Web-site design: Relative importance and recommendations for effectiveness” Paper presented at the Proceedings of the 32nd Hawaii International Conference on System Sciences, Hawaii USA.

Kim, S. E., Shaw, T. and Schneider, H. (2003) “Web site design benchmarking within industry groups” Internet Research: Electronic Networking Applications and Policy, Vol 13 No 1 pp 17-26.

Konradt, U., Wandke, H., Balazs, B. and Christophersen, T. (2003) “Usability in online shops: Scale construction, validation and the influence on the buyers’ intention and decision” Behaviour & Information Technology, Vol 22 No 3 pp 165-174.

Lii, Y. S., Lim, H. J. and Tseng, L. P. D. (2004) “The effects of Web operational factors on marketing performance” Journal of American Academy of Business, Vol 5 No 1/2 pp 486-494.

Lincke, D. M. (1998) “Evaluating integrated electronic commerce systems” Electronic Markets, Vol 8 No 1 pp 7-11.

Liu, C., Arnett, K. P., Capella, L. M. and Beatty, R. C. (1997) “Web sites of the Fortune 500 companies: Facing customers through home pages” Information & Management, Vol 31 No 6 pp 335-345.

Loiacono, E. T. (1999) “WebQual: a Web quality instrument” Paper presented at the Fifth Americas Conference on Information Systems, Milwaukee USA.

Lu, M. T. and Yeung, W. L. (1998) “A framework for effective commercial Web application development” Internet Research: Electronic Networking Applications and Policy, Vol 8 No 2 pp 166-173.

Merwe, R. v. d. and Bekker, J. (2003) “A framework and methodology for evaluating e-commerce Web sites” Internet Research: Electronic Networking Applications and Policy, Vol 13 No 5 pp 330-341.

Mich, L., Franch, M. and Gaio, L. (2003) “Evaluating and designing Web site quality” IEEE MultiMedia, Vol 10 No 1 pp 34-43.

Nielsen, J. (1993) Usability Engineering, Academic Press, Boston.

Nielsen, J. (1995) Multimedia and Hypertext: The Internet and Beyond, AP Professional, Boston.

Olsina, L., Godoy, D., Lafuente, G. J. and Rossi, G. (1999) “Specifying quality characteristics and attributes for Web sites” Paper presented at the International Conference on Software Engineering, Los Angeles USA.

Palmer, J. W. (2002) “Web site usability, design, and performance metrics” Information Systems Research, Vol 13 No 2 pp 151-167.

Patel, J., Schenecker, M., Desai, G. and Levitt, J. (1998) “Tools for growth in e-commerce” Informationweek, Vol 712 pp 91-104.

Rachman, Z. M. and Buchanan, J. (1999) “Effective tourism Web sites” Department of Management Systems Research Report Series, Number 99-12, University of Waikato, Hamilton, N.Z.

Rettig, J. and LaGuardia, C. (1999) “Beyond ‘Beyond Cool’: Reviewing Web resources” Online, Vol 23 No 4 pp 51-55.

Schubert, P. and Dettling, W. (2002) “Extended Web assessment method (EWAM): Evaluation of e-commerce applications from the customer’s viewpoint” International Journal of Electronic Commerce, Vol 7 No 2 pp 51-80.

Schubert, P. and Selz, D. (1997) “Web assessment: A model for the evaluation and the assessment of successful electronic commerce applications” Electronic Markets, Vol 7 No 3 pp 1-17.

Schubert, P. and Selz, D. (1999) “Web Assessment: Measuring the effectiveness of electronic commerce sites going beyond traditional marketing paradigms” Paper presented at the Proceedings of the 32nd Hawaii International Conference on System Sciences, Hawaii USA.

Shneiderman, B. (1997) “Designing information-abundant Web sites: issues and recommendation” Human-Computer Studies, Vol 47 pp 5-29.

Sing, C. K. (2004) “The measurement, analysis, and application of the perceived usability of electronic stores” Singapore Management Review, Vol 26 No 2 pp 49-64.


Thelwall, M. (2003) “A layered approach for investigating the topological structure of communities in the Web” Journal of Documentation, Vol 59 No 4 pp 410-429.

Toh, L. Y. and Pendse, S. (1997) “An empirical evaluation of Web page design and consumer behaviour” Paper presented at the 1st Annual Collected Workshop on Electronic Commerce, Adelaide Australia.

Tullis, T. S. (1998) “A method for evaluating Web page design concepts” Paper presented at the ACM SIGCHI Conference on Human Factors in Computing Systems, Los Angeles USA.

Wan, H. A. and Chung, C. W. (1998) “Web page design and network analysis” Internet Research: Electronic Networking Applications and Policy, Vol 8 No 2 pp 115-122.

Zhang, P., Small, R. V., von Dran, G. M. and Barcellos, S. (1999) “Web sites that satisfy users: A theoretical framework for Web user interface design and evaluation” Paper presented at the Proceedings of the 32nd Hawaii International Conference on System Sciences, Hawaii USA.

Electronic Journal of Information Systems Evaluation Volume 7 Issue 1 (2004) 31-42

Appendix 1: The Final Version of the Web Evaluation Instrument

Step 1: Look for each of the following Web functions (Column 1). If a function is found on the homepage, place a “√” in Column 2; if not, move on to the next function.

The instrument sheet has four columns:
Column 1 – No. and Web function; Column 2 – Where?; Column 3 – Activity; Column 4 – scores for criteria C.1, C.2.1, C.2.2 and C.3.

No. – Web function – Activity:
1.1 Company Overview (about us) – To find the information which introduces the company. (then use Criteria Form 1)
1.2 Financial Information (investor information or annual report) – To find the financial information about the company. (then use Criteria Form 1)
1.3 Privacy (privacy policy) – To find the privacy description. (then use Criteria Form 1)
1.4 Product Catalog – To find one product. Is the price shown in the catalog? YES / NO. Can the product be ordered? YES (jump to 2.1) / NO. (then use Criteria Form 1)
1.5 New Product Announcement – To find one new product item. (then use Criteria Form 1)
1.6 News (what’s new) – To find one item of news. (then use Criteria Form 1)
1.7 Learning Information – To find information which provides knowledge to help learning. (then use Criteria Form 1)
2.1 Order (Negotiation) – To find the information about how to order the product. (then use Criteria Form 2)
3.1 Payment – To find the information about how to make payment. (then use Criteria Form 2)
3.2 Monitoring Goods (order status) – To find the information about how to monitor goods. (then use Criteria Form 2)
3.3 Exchange Document – To find the information about how to exchange documents. (then use Criteria Form 2)
4.1 Maintenance (customer support) – To find the information about how to maintain the product. (then use Criteria Form 1)
4.2 Training Information – To find the information about how to train users of the product. (then use Criteria Form 1)
4.3 FAQ of Customer Support – To find the FAQ descriptions for customer support. (then use Criteria Form 1)

Step 2: Conduct the activities that have been ticked by clicking their function items, then complete Column 4 using Criteria Form 1. If other functions are found while conducting the activities, note in Column 2 the hyperlink item under which each was found. If a function is password protected, use Criteria Form 2 to evaluate it.

Step 3: Conduct the activities for the functions found during Step 2, and then complete Column 4 (if a function found does not work, it scores 0 overall).

Criteria Form 1

Criterion 1: How easy is it to use the function to find one piece of information?
A – Very easy. B – Easy. C – Not easy. D – Difficult. E – The function could not work.

Criterion 2.1: How informative is the Web function?
A – Very informative. The function comprises more than 10 subfunctions. Each subfunction is a hyperlink which links to more specific subjects. B – Informative. The function comprises 5–10 subfunctions. Each subfunction is a hyperlink which links to more specific subjects. C – Not very informative. The function comprises 2–5 subfunctions. Each subfunction is a hyperlink which links to more specific subjects. D – The function is only a one-page presentation. E – Useless.

Criterion 2.2: How useful is the information found?
A – The content of the information is three times the screen. B – The content of the information is two times the screen. C – The content of the information is one screen. D – The content of the information is less than one screen. E – Useless.

Criterion 3: Describe the function and the information found after conducting the activity.
The function has: A – Search engine: there is a specific search engine provided to search the previous information (not the general search function for the whole Web site). The information found comprises (multiple choice): B – Hyperlinks in the text: at least one hyperlink exists in the final text and provides links to other resources. C – Interactive function: e-mail provided at the end of the information, which is used to inquire about information or give feedback. D – Real-time communication function: there is a function providing direct communication with service persons.

Criteria Form 2 (for evaluating password-protected functions)

Criterion 1: Is there any helpful instruction provided to guide how to use the function?
A – Yes, much helpful information is provided, spanning more than 10 Web pages. B – Yes, some helpful information is provided, spanning 2–10 Web pages. C – Yes, a little helpful information is provided, on only one Web page. D – Yes, but only a phone number or e-mail address is provided. E – No, there is no information which introduces how to use the function.

Criterion 3: Describe the function and the characteristics found on the Web page where the function is located.
The function is (choose A or B): A – The function comprises some information, but it does not provide direct interaction with the company. B – It is a function that exchanges data directly with the company. What characteristics are found on the Web page where the function is located? (multiple choice): C – Phone or fax numbers provided at the end of the information, which are used to inquire about further information. D – E-mail provided at the end of the information, which is used to inquire about information or give feedback. E – Real-time communication function: there is a function providing direct communication with service persons.

Scoring Systems for Each Criterion

                 Criteria Form 1              Criteria Form 2
Level    1      2.1    2.2    3       1      2.1    2.2    3
A        10.0   10.0   10.0   2.0     10.0   −      −      2.0
B        7.5    7.5    7.5    2.0     7.5    −      −      2.0
C        5.0    5.0    5.0    2.0     5.0    −      −      2.0
D        2.5    2.5    2.5    10.0    2.5    −      −      2.0
E        0      0      0      −       0      −      −      10.0
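The instrument itself is paper-based, but for evaluators tallying many sites the score lookup above can be sketched in a few lines of code. This is only an illustrative sketch; the table names (FORM1, FORM2) and the score function are hypothetical, not part of the instrument. Criterion 3 is multiple choice, so several levels may apply to it at once.

```python
# Score tables transcribed from the scoring system above.
# Criterion 3 is multiple choice; real-time communication (Form 1 level D,
# Form 2 level E) scores 10.0, all other options score 2.0.
FORM1 = {
    "1":   {"A": 10.0, "B": 7.5, "C": 5.0, "D": 2.5, "E": 0.0},
    "2.1": {"A": 10.0, "B": 7.5, "C": 5.0, "D": 2.5, "E": 0.0},
    "2.2": {"A": 10.0, "B": 7.5, "C": 5.0, "D": 2.5, "E": 0.0},
    "3":   {"A": 2.0, "B": 2.0, "C": 2.0, "D": 10.0},
}
FORM2 = {
    "1": {"A": 10.0, "B": 7.5, "C": 5.0, "D": 2.5, "E": 0.0},
    "3": {"A": 2.0, "B": 2.0, "C": 2.0, "D": 2.0, "E": 10.0},
}

def score(form, ratings):
    """Sum the scores for one Web function.

    ratings maps a criterion ("1", "2.1", "2.2", "3") to a level letter,
    or to a list of letters for the multiple-choice criterion 3.
    """
    total = 0.0
    for criterion, levels in ratings.items():
        if isinstance(levels, str):
            levels = [levels]
        for level in levels:
            total += form[criterion][level]
    return total

# Example: a function rated Easy (B), Informative (B), one screen of
# content (C), with an e-mail contact (criterion 3 option C).
print(score(FORM1, {"1": "B", "2.1": "B", "2.2": "C", "3": ["C"]}))  # 22.0
```

A function that does not work at all is simply recorded as 0, matching Step 3 of the instrument.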


