To ensure that the web services provided are efficient and cost effective, QoS has to be implemented so as to derive an algorithm with lower and upper limits. Let SR represent the rate of inquiries made into the system; the total number of service requests made by any given time t is then SR×t. From this, it can be deduced that the cost of the service, Cs, is given by:
Cs = SR×t
Since the main goal is to achieve the highest service exploitation at the least cost possible, let Es be the service exploitation and SA be the number of stops reached for every request made. The service exploitation, Es, is then given by:
Es = SA×t
From these, the service competence, SC, can be derived as:
SC = Es = SA×t
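The quantities above can be sketched as simple functions. This is only a toy illustration of the definitions: the function names are chosen here and are not part of the derivation.

```python
def service_cost(sr: float, t: float) -> float:
    """Cs = SR * t: total service requests made by time t."""
    return sr * t

def service_exploitation(sa: float, t: float) -> float:
    """Es = SA * t: total stops reached by time t."""
    return sa * t

# Over the same elapsed time, the ratio Es / Cs reduces to SA / SR,
# so competence improves by reaching more stops per request made.
sr, sa, t = 10.0, 4.0, 5.0
print(service_cost(sr, t))          # 50.0
print(service_exploitation(sa, t))  # 20.0
print(service_exploitation(sa, t) / service_cost(sr, t))  # 0.4
```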
Now, since the main objective is to reach as many stops as possible while maintaining the service at a low cost, a compromise should be reached between Es and Cs. Assuming that the total number of stops is N and that all the stops in the clusters are reachable, the two clusters can be described as (2/3)N and (1/3)N. Thus, when one service request is made, the service competence becomes:
(1/3)N ÷ (2/3)N = 1/2
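That the ratio of the two cluster sizes is 1/2 for any N can be checked numerically; the following throwaway check uses exact fractions to avoid floating-point noise.

```python
from fractions import Fraction

# With clusters of (1/3)N and (2/3)N stops, the ratio is 1/2 for any N.
for n in (3, 30, 300):
    smaller = Fraction(1, 3) * n
    larger = Fraction(2, 3) * n
    print(smaller / larger)  # 1/2, independent of N
```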
Consequently, assuming that there are G(t) groups at any time t, the service competence can be redefined as:
SC = Es = SA×t
Meanwhile, SR×t should be maintained at levels less than or equal to G(t).
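The constraint that SR×t stay within G(t) amounts to a simple feasibility check, sketched below; the function and parameter names are invented here for illustration.

```python
def within_budget(sr: float, t: float, groups_at_t: int) -> bool:
    """True when the requests made so far, SR * t, do not exceed G(t)."""
    return sr * t <= groups_at_t

print(within_budget(sr=1.0, t=4.0, groups_at_t=5))  # True:  4 <= 5
print(within_budget(sr=2.0, t=4.0, groups_at_t=5))  # False: 8 > 5
```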
Ideally, in any ad-hoc network with N stops and G(t) groups at any given time t, there must exist a perfect clustering algorithm accurate enough to classify all stops into groups. When one stop is chosen as the representative stop of every group for the purpose of an adaptive service, SC and Cs can be redefined as:
SC = SA×t and Cs = G(t)
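Under this clustering, the two redefined quantities can be sketched as follows. This is a minimal sketch under the assumptions above; the representative-based service itself is not modelled, only the accounting.

```python
def clustered_cost(groups_at_t: int) -> int:
    """Cs = G(t): one request per group representative."""
    return groups_at_t

def clustered_competence(sa: float, t: float) -> float:
    """SC = SA * t: stops reached via the representatives by time t."""
    return sa * t

# Example: 5 groups at time t; each request to a representative serves a
# whole group, so the cost stays at G(t) = 5 however many stops are reached.
print(clustered_cost(5))                # 5
print(clustered_competence(20.0, 5.0))  # 100.0
```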
In the case that all the stops are able to reach the representative stops in their respective groups at a time t0, then
SA×t0 = N, and SC/Cs = (SA×t0)/G(t0) = N/G(t0)
This is therefore the maximum value of service achievable. However, this may not always be the case, since there is an expected probability that some stops in the same cluster may not be reachable from the service request.
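The effect of unreachable stops can be sketched with a simple expectation: if each of the N stops is independently reachable from its representative with probability p, the expected number of stops served is p×N. This is a toy model assumed here for illustration, not part of the derivation above.

```python
import random

def expected_reached(n_stops: int, p_reachable: float) -> float:
    """Expected stops served when each stop is independently reachable."""
    return p_reachable * n_stops

def simulate_reached(n_stops: int, p_reachable: float, seed: int = 0) -> int:
    """Monte Carlo counterpart of the expectation above."""
    rng = random.Random(seed)
    return sum(rng.random() < p_reachable for _ in range(n_stops))

print(expected_reached(300, 0.5))  # 150.0
print(simulate_reached(300, 0.5))  # close to 150 for any seed
```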
This dissertation is intended to be completed in fifteen weeks. The first three weeks will be for making notes on all the research done pertaining to the study, as well as for identifying the appropriate web services and platforms to be used for the experiment. A draft will then be made from these notes and findings. The following three weeks will comprise further research and the preparation of a second draft based on the first round of research and the results drawn from the experiment. The next seven weeks will be allocated to the completion of the research and the preparation of a final draft. Finally, the last two weeks will be set aside for the completion and binding of the final dissertation for presentation.
Since the data stored comes in many forms, resides on numerous devices, and is accessed by many users, there is no single solution to the protection of this data. The solution is multi-phased, just as the problem is; appropriate policies should therefore be defined, enforced, and monitored so as to ensure maximum data protection. The research has shown that most search results are below standard and consequently fail to meet users' requirements. To ensure that search results meet users' standards, the current search technology should be improved. An appropriate schema for automatic data classification should be devised to enable the automatic classification of data into searchable categories; this will improve not only the search but the accuracy of the results as well. Unstructured data that has not been accessed for long periods of time should be deleted from the dynamic databases so as to create more space for active data.
As much as new technologies are invented to ensure quality protection of unstructured data, individuals should also be educated on the risks of storing their data in unsecured databases. Governments should impose new policies on data privacy to ensure that data stored in the cloud is well protected. Data should be protected both in transit and at rest. Before this is put into practice, however, apposite tools have to be developed to determine what the current capabilities are, e.g. whether they can deliver as expected or can only work in restricted cases. Data should be categorized in a manner that makes it simple and efficient to locate and search.
The prominence of internet-based and rich media applications has led to the sudden growth of unstructured data: formless data that includes PDF files, slide decks, videos, web pages, images, MP4 and MP3 files, and word documents, among others, which in turn requires secure and more reliable storage and retrieval solutions. Even though several studies have been conducted, sufficient research has not been carried out on the scalability and dependability of web services. The objectives of this thesis include the determination of a Confidentiality, Integrity and Availability (CIA) assurance model, as well as the derivation of a new, scalable algorithm for the minimization of web service cost using the Service Level Agreement (SLA) and the Quality of Service (QoS). All the objectives have been successfully met; service competence has been obtained while maintaining the minimum cost of exploitation.