General Areas of Interest:
- Web service management
- Business Intelligence & Data warehousing
- Content management
- XML Query Processing & optimization
- XML database management
- Data query in Mobile computing & Sensor networking
- Distributed Transaction management Systems
Topic 1: Big data storage allocation in Cloud computing
The challenge of efficiently archiving and managing data is intensifying with the enormous growth of data, and big data storage and management has become a pressing problem for industry. Information of many types is stored across multiple locations in the cloud. In particular, an increasing number of enterprises employ distributed storage systems to store, manage, and share huge volumes of critical business information in the cloud. The same document may be duplicated in several places; duplication makes retrieval convenient and efficient, but it becomes difficult to update multiple copies of the same document once the data has been modified. Providing consistent, efficient, and reliable retrieval of data stored in different locations is therefore a complicated, multi-objective task. One important open problem is how to balance load across the system with minimal update cost; another is how to make the system elastic so that available resources are used effectively with minimal communication cost. Developing effective techniques for designing scalable, elastic, and autonomic multitenant database systems is a critical and challenging task. In addition, ensuring the security and privacy of data outsourced to the cloud is also important for the success of data management systems in the cloud.
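One classical technique relevant to the load-balancing-with-minimal-update-cost problem above is consistent hashing, which spreads keys across storage nodes so that adding or removing a node moves only a small fraction of the data. The sketch below is illustrative only (node and document names are hypothetical), not part of the proposed project:

```python
# Minimal consistent-hashing sketch (hypothetical node/key names).
# Each node is mapped to many points on a hash ring; a key is stored
# on the first node at or after the key's own hash position. When a
# node joins or leaves, only the keys in its arcs move, which keeps
# the re-replication (update) cost low while balancing load.
import bisect
import hashlib


def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)


class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        # vnodes virtual points per node smooth out the load.
        self._ring = sorted((_hash(f"{n}#{i}"), n)
                            for n in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def node_for(self, key: str) -> str:
        i = bisect.bisect(self._keys, _hash(key)) % len(self._keys)
        return self._ring[i][1]


ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print({doc: ring.node_for(doc) for doc in ("doc1", "doc2", "doc3")})
```

Because placement depends only on the hash ring, every replica site can compute the same mapping locally, with no central directory to keep consistent.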
Topic 2: Adopting NoSQL for Big data management
Big data continues to grow at an enormous rate. Organizations have great potential to exploit it to enhance the customer experience and transform their business to win the market: big data enables them to store, manage, and manipulate vast amounts of data to gain the right knowledge.
Big data management is a combination of data-management technologies that have evolved over time.
How does a company store and access big data to the best advantage? Are traditional databases still the best option? What does it mean to transform massive amounts of data into knowledge? Clearly, big data requirements are beyond what a relational database can deliver for huge volumes of highly distributed, complex structured data. Traditional relational databases were never designed to cope with modern application requirements -- including massive amounts of unstructured data and global access by millions of users on mobile devices that require geographic distribution of data.
In this research, we will identify the gap between enterprise requirements and the capabilities of traditional relational databases, and look for other database solutions. We will explore NoSQL data management for big data to identify where it offers the greatest advantage, and gain insight into how technology transitions in software, architecture, and process models are changing data management.
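The schema flexibility that distinguishes NoSQL document stores from relational tables can be illustrated with a toy in-memory store. The class and collection names below are hypothetical, and the sketch stands in for a real system such as a document database:

```python
# Toy document store illustrating NoSQL schema flexibility:
# documents in the same collection need not share a fixed set of
# columns, so new attributes can appear without a relational
# ALTER TABLE migration.
class DocumentStore:
    def __init__(self):
        self._collections = {}

    def insert(self, collection, doc_id, doc):
        self._collections.setdefault(collection, {})[doc_id] = dict(doc)

    def find(self, collection, **criteria):
        # Return every document whose fields match all criteria.
        docs = self._collections.get(collection, {}).values()
        return [d for d in docs
                if all(d.get(k) == v for k, v in criteria.items())]


store = DocumentStore()
store.insert("customers", "c1", {"name": "Ada", "country": "UK"})
# A later document carries an extra field; no schema change required.
store.insert("customers", "c2",
             {"name": "Lin", "country": "SG", "devices": ["mobile", "web"]})
print(store.find("customers", country="SG"))
```

A relational design would force either a schema migration or a nullable column for every such new attribute; the document model simply absorbs it.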
Topic 3: Top-k queries in uncertain big data
Effectively extracting reliable and trustworthy information from big data has become crucial for large business enterprises, and obtaining useful knowledge for making better decisions to improve business performance is not a trivial task. The most fundamental challenge in big data extraction is handling data uncertainty for emerging business needs such as marketing analysis, prediction, and decision making. The answers to analytical queries performed over imprecise data repositories are naturally associated with a degree of uncertainty, yet reliable and accurate data are crucial for effective data analysis and decision making. This project will therefore develop new techniques and novel algorithms to extract reliable and useful information from massive, distributed, large-scale data repositories.
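To make the notion of a top-k query over uncertain data concrete, the sketch below ranks tuples by expected score (score times existence probability) -- one of several semantics proposed in the probabilistic-database literature, not necessarily the one this project would adopt. The sensor readings are illustrative data:

```python
# Top-k over uncertain tuples under the expected-score semantics.
# Each tuple carries a score and an existence probability; tuples
# are ranked by score * probability.
import heapq


def expected_score_topk(tuples, k):
    """tuples: iterable of (name, score, probability) triples."""
    return heapq.nlargest(k, tuples, key=lambda t: t[1] * t[2])


readings = [
    ("sensor-a", 90, 0.4),   # expected score 36
    ("sensor-b", 70, 0.9),   # expected score 63
    ("sensor-c", 80, 0.6),   # expected score 48
]
print(expected_score_topk(readings, 2))  # sensor-b, then sensor-c
```

Note that the certain-data winner (sensor-a, score 90) drops out of the top 2 once its low probability is taken into account, which is exactly why uncertainty-aware ranking semantics matter.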
Topic 4: Feature-based recommendation framework on OLAP
Queries in Online Analytical Processing (OLAP) are user-guided. OLAP is based on a multidimensional data model designed to answer complex analytical and ad-hoc queries with rapid execution times. Those queries, whether routine or on-demand, revolve around OLAP tasks, and most of them are reusable and optimized in the system. Therefore, the queries recorded in the query logs while completing various OLAP tasks may be reusable. The query logs usually contain sequences of SQL queries that reveal users' action flows, and hence their preferences, interests, and behaviours during a session.
This research project will investigate feature extraction to identify query patterns and user behaviours from historical query logs. The expected results will be used to recommend forthcoming queries to help decision makers with data analysis. The purpose of this research is to improve the efficiency and effectiveness of OLAP in terms of computation cost and response time.
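One simple baseline for log-based query recommendation is a first-order Markov model over query templates: recommend the most frequent successor of the current query in historical sessions. The query-template names below are hypothetical; a real system would first extract features (tables, measures, dimensions) from the logged SQL text:

```python
# Baseline next-query recommender: count successor frequencies in
# logged sessions and recommend the most common follow-up query.
from collections import Counter, defaultdict


class NextQueryRecommender:
    def __init__(self):
        self._successors = defaultdict(Counter)

    def fit(self, sessions):
        # Each session is an ordered list of query templates.
        for session in sessions:
            for current, nxt in zip(session, session[1:]):
                self._successors[current][nxt] += 1

    def recommend(self, current, k=1):
        return [q for q, _ in self._successors[current].most_common(k)]


log = [
    ["sales_by_region", "sales_by_region_quarter", "top_products"],
    ["sales_by_region", "sales_by_region_quarter", "margin_by_region"],
    ["sales_by_region", "top_products"],
]
model = NextQueryRecommender()
model.fit(log)
print(model.recommend("sales_by_region"))  # ['sales_by_region_quarter']
```

Recommended queries that already exist in the log can also reuse their cached plans or materialized results, which is where the computation-cost and response-time gains come from.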
A CS Research Topic Generator
How to Pick a Worthy Topic in 10 Seconds
Computer Science is facing a major roadblock to further research. The problem is most evident among students, but it afflicts many researchers as well: people simply have a tough time inventing research topics that sound sufficiently profound and exciting. Many PhD students needlessly waste years simply coming up with a thesis topic. And researchers often resort to reading documents from government grant agencies just to know what to work on for the next proposal!
Good news for the CS community: the problem has at last been solved. The table below provides the answer.
To generate a technical phrase, randomly choose one item from each column. For example, selecting synchronized from column 1, secure from column 2, and protocol from column 3 produces: synchronized secure protocol.
Best of all, two phrases can be combined with simple connectives, making the result suitable for the most demanding use. Possible connectives include:
For example, one could generate a thesis title by selecting a second phrase and a connective:
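The procedure above can be sketched in a few lines. Only "synchronized", "secure", and "protocol" appear in the text here; the other column entries and the connectives below are hypothetical placeholders standing in for the full table:

```python
# Random research-topic generator, as described in the text.
# Column entries beyond the three named words, and the connectives,
# are placeholder guesses for the original table.
import random

COLUMN_1 = ["synchronized", "distributed", "parallel"]
COLUMN_2 = ["secure", "virtual", "binary"]
COLUMN_3 = ["protocol", "network", "compiler"]
CONNECTIVES = ["for", "with", "in"]


def random_phrase(rng=random):
    # One word from each column, in order.
    return " ".join(rng.choice(col) for col in (COLUMN_1, COLUMN_2, COLUMN_3))


def thesis_title(rng=random):
    # Two phrases joined by a connective.
    return f"{random_phrase(rng)} {rng.choice(CONNECTIVES)} {random_phrase(rng)}"


print(thesis_title())
```

As the text promises, this also demonstrates that the selection really can be automated.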
The technique described here for selecting a research topic is far superior to the method currently in use because it can be automated -- a computer program can be written to select a phrase at random whenever one is needed. Furthermore, thanks to an enhancement by Ian Stark at The University of Edinburgh in Scotland, it is possible to automate an additional step in the research process by performing an automated literature search. Try the system by first generating a random topic and then performing an automated literature search.