Journals for publication
https://www.igi-global.com/journal/journal-database-management/1072
https://www.semanticscholar.org/paper/Challenges-in-database-design-with-Letkowski/2c5272bbbb109180d967e8e43742c89aebf795b1
http://ceur-ws.org/Vol-567/invited1.pdf
https://www.managementstudyguide.com/database-applications-history.htm
https://www.academia.edu/4067423/Database_Management_Concepts_and_Design
https://iopscience.iop.org/article/10.1088/1742-6596/803/1/012030/pdf
Titles to search
Conventional database management systems
Spatial Database Management Systems
Database design -- optimization and normalization
Contemporary Issues in Database Design and Information Systems Development
Abstract:
The use of a central repository for data and a database management system that enables programs to share data more efficiently and ensures the consistency of the data across the system is discussed. Relational databases, which represent both entities and relationships using tables, are described. It is shown that by proper design of the relational tables, the redundancy and wasted space of many ad hoc databases can be eliminated. Data are retrieved from a relational database through the use of a declarative (nonprocedural) query language. The leading query language for relational databases, the Structured Query Language (SQL), is examined.
https://ezlibrary.ju.edu.jo:2078/document/84095
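As a quick illustration of the point about redundancy and declarative retrieval, here is a minimal sketch (table and column names are invented for illustration, not taken from the paper):

```sql
-- Ad hoc design: the customer's details are repeated on every order row,
-- wasting space and risking inconsistent copies of the same fact:
--   orders_flat(order_id, customer_name, customer_city, item, qty)

-- Proper relational design: each fact is stored exactly once.
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    city        VARCHAR(100)
);

CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    item        VARCHAR(100) NOT NULL,
    qty         INTEGER NOT NULL
);

-- Declarative retrieval with SQL: we state WHAT we want, not HOW to fetch it.
SELECT c.name, o.item, o.qty
FROM customers AS c
JOIN orders    AS o ON o.customer_id = c.customer_id
WHERE c.city = 'Amman';
```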
Abstract:
Today is the era of the Internet of Things (IoT). The recent advances in hardware and information technology have accelerated the deployment of billions of interconnected, smart and adaptive devices in critical infrastructures like health, transportation, environmental control, and home automation. Transferring data over a network without requiring any kind of human-to-computer or human-to-human interaction, brings reliability and convenience to consumers, but also opens a new world of opportunity for intruders, and introduces a whole set of unique and complicated questions to the field of Digital Forensics. Although IoT data could be a rich source of evidence, forensics professionals cope with diverse problems, starting from the huge variety of IoT devices and non-standard formats, to the multi-tenant cloud infrastructure and the resulting multi-jurisdictional litigations. A further challenge is the end-to-end encryption which represents a trade-off between users' right to privacy and the success of the forensics investigation. Due to its volatile nature, digital evidence has to be acquired and analyzed using validated tools and techniques that ensure the maintenance of the Chain of Custody. Therefore, the purpose of this paper is to identify and discuss the main issues involved in the complex process of IoT-based investigations, particularly all legal, privacy and cloud security challenges. Furthermore, this work provides an overview of the past and current theoretical models in the digital forensics science. Special attention is paid to frameworks that aim to extract data in a privacy-preserving manner or secure the evidence integrity using decentralized blockchain-based solutions. In addition, the present paper addresses the ongoing Forensics-as-a-Service (FaaS) paradigm, as well as some promising cross-cutting data reduction and forensics intelligence techniques. Finally, several other research trends and open issues are presented, with emphasis on the need for proactive Forensics Readiness strategies and generally agreed-upon standards.
https://ezlibrary.ju.edu.jo:2078/document/8950109
-----
E-business: A New Challenge for Database Management Systems
A database management system (DBMS) is the software providing facilities for managing the data used in the various information system (IS) processes.
The main goal of a DBMS is to achieve independence between programs implementing IS processes and the physical representation of the data (Date & Hopewell, 1971; ANSI, 1978).
In fact, the programs can retrieve, create or delete a piece of data without knowing how it is stored in the disk or any other storage device. The DBMS facilities include data modelling, data retrieving and manipulation and others like security, efficiency, etc.
During the last three decades, many generations of DBMS have been developed. In general, the criterion used to distinguish the different generations is the data model. The first generation used the hierarchical or network data model, in which data is structured as mathematical trees or graphs (Bachman, 1969; Taylor & Frank, 1976). The most recent generations use the relational data model (Codd, 1970; Astrahan et al., 1976), the object-oriented one (Atkinson et al., 1989; Bancilhon et al., 1992; Cattel et al., 1997), or a mixture of these two models called the object-relational data model (Stonebraker et al., 1996).
The DBMS technology has also evolved to follow new hardware and software developments. The first generation of DBMS allowed the implementation of IS processes in a centralised environment. These environments were built around costly large computers called mainframes. Users were directly connected to this computer by means of specific terminals that could do nothing but call up the programs implemented on the mainframe. The growth of networking technology and personal computers led to a transformation of database applications and IS architectures through the client/server model (Dewire, 1993; Chapell et al., 1994).
6. Concluding remarks
World Wide Web technology is an effective revolution for information systems. For several years, e-commerce was just a set of tools like EDI (Electronic Data Interchange) for data and information exchange. E-commerce was then the business of commercial organisations, including suppliers and final vendors, and did not concern the final consumer. Nowadays, this final consumer is also an information system actor, since he or she directly interacts with the automated information system processes. Such a change is both a technical and a managerial one. In this paper, we tried to highlight some technical aspects concerning database and web integration. We have shown that the DBMS must evolve to provide infrastructures allowing integration and interoperability of legacy and probably heterogeneous software components. The major actors of software commerce believe in the future of such infrastructures and are developing standards for their specification and development.
During object-relational database physical structure design, problems are caused by three factors: ambiguity of transformations of the conceptual model, multiplicity of quality assessment criteria, and the lack of a constructive model. In the present study a constructive hierarchical model of physical database structure has been developed. Implementations use the XML, SQL and Java languages. A multi-criteria structure optimisation method has also been developed. The structure variation space is generated using a transformation rule database. A prototype has been implemented within the framework of the research. [ABSTRACT FROM AUTHOR]
https://www.sciendo.com/article/10.2478/acss-2018-0004
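To make the "ambiguity of transformations" concrete: one conceptual relationship can usually be mapped to several physical structures, each with different quality trade-offs. A hedged sketch in PostgreSQL-flavoured SQL (names invented for illustration; not the paper's actual transformation rules):

```sql
-- One conceptual model: Author 1..N Book. Two candidate physical mappings.

-- Variant A: separate table plus foreign key and a supporting index;
-- favours update-heavy workloads and avoids duplication.
CREATE TABLE author (
    author_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);
CREATE TABLE book (
    book_id   INTEGER PRIMARY KEY,
    author_id INTEGER NOT NULL REFERENCES author(author_id),
    title     TEXT NOT NULL
);
CREATE INDEX book_author_idx ON book(author_id);

-- Variant B: nest the books inside the author row (object-relational style);
-- favours read-mostly workloads that always fetch an author with all books.
CREATE TABLE author_nested (
    author_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    books     JSONB  -- e.g. [{"title": "..."}, ...]
);
```

Choosing between such variants against several criteria at once (query time, update cost, storage) is exactly the multi-criteria optimisation problem the paper addresses.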
Traceability between Code and Design Documentation in Database Management System: A Case Study
Abstract: Traceability builds many strong connections or links between requirements and design, so the main purpose of traceability is to maintain consistency between a high-level conceptual view and a low-level implementation view. The purpose of this paper is to achieve full consistency between all components over all phases in the Oracle Designer tool by allowing traceability to be carried out not only between the requirements and the design but also between the code and the design. In this paper, we propose a new methodology to support traceability and completeness checking between the code and design of Oracle database applications. The new algorithm consists of a set of interrelated steps to initialize the comparison environment. An example of a student information system is used to illustrate the work.
Abstract:
Data storage management in a mobile environment is a key technology research area. Mobile environments are multi-source and must be self-reliant to meet environmental characteristics. Data storage management and distribution should concentrate on these characteristics and on efficient data storage methods. This is needed to provide a safe and speedy storage infrastructure for mobile data applications. Advanced data storage management methods are essential to meet the data needs of an increasingly mobile environment. The provision of efficient and reliable data storage infrastructure is vital to running mobile networks and to promoting the development of China's mobile network technology. [ABSTRACT FROM AUTHOR]
Recently, digital libraries have been created for diverse communities and in different fields, in which large numbers of geographically distributed users can access the contents of large and diverse repositories of electronic objects such as images, audio, video and files. Storage and copying of information are done either by downloading or by printing from a master file. In this work the MySQL database and the PHP language are used to design the web site of a university (Madent Al-Elem University College), and part of this web site is an electronic library intended for the students of this university only. [ABSTRACT FROM AUTHOR]
Abstract (Arabic, translated):
At present, digital libraries are created for various communities and in various fields, where large numbers of geographically distributed users access the contents of large and diverse repositories of electronic objects such as images, audio, video and files. Information is stored and copied either by downloading or by printing from the master file. In this work, a web site is designed for the university (Madent Al-Elem University College), and part of this site is an electronic library restricted to the students of this university only, using the MySQL database and the PHP language. [ABSTRACT FROM AUTHOR]
It is surprisingly hard to obtain accurate and precise measurements of the time spent executing a query because there are many sources of variance. To understand these sources, we review relevant per-process and overall measures obtainable from the Linux kernel and introduce a structural causal model relating these measures. A thorough correlational analysis provides strong support for this model. We attempted to determine why a particular measurement wasn't repeatable and then to devise ways to eliminate or reduce that variance. This enabled us to articulate a timing protocol that applies to proprietary DBMSes, that ensures the repeatability of a query, and that obtains a quite accurate query execution time while dropping very few outliers. The resulting query time measurement procedure, termed the Tucson Timing Protocol Version 2 (TTPv2), consists of the following steps: (i) perform sanity checks to ensure data validity; (ii) drop some query executions via clearly motivated predicates; (iii) drop some entire queries at a cardinality, again via clearly motivated predicates; (iv) for those that remain, compute a single measured time by a carefully justified formula over the underlying measures of the remaining query executions; and (v) perform post-analysis sanity checks. The result is a mature, general, robust, self-checking protocol that provides a more precise and more accurate timing of the query. The protocol is also applicable to other operating domains in which measurements of multiple processes, each doing computation and I/O, are needed. [ABSTRACT FROM AUTHOR]
Social workers oftentimes experience a disconnect between the work they do and how that work is captured by agency databases. The database design described in this article attempts to remedy some of those issues - specifically, having to enter identical client information more than once, not having a way to capture complex family relationships, and not having a way to capture evolving client relationships over time - while being mindful of ideal database design principles, particularly the concept of database normalization. This alternative design grew out of an ongoing research project and is discussed within the context of one of the involved agencies. [ABSTRACT FROM AUTHOR]
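A minimal sketch of what such a normalized design could look like (invented table and column names, not the article's actual schema): each person is entered once, and relationships sit in their own table with validity dates so they can evolve.

```sql
-- Each client is recorded exactly once (no re-entering identical data).
CREATE TABLE client (
    client_id  INTEGER PRIMARY KEY,
    first_name VARCHAR(80) NOT NULL,
    last_name  VARCHAR(80) NOT NULL,
    birth_date DATE
);

-- Relationships live in their own table, so arbitrarily complex family
-- structures become rows, and the date columns let relationships change.
CREATE TABLE client_relationship (
    client_id   INTEGER NOT NULL REFERENCES client(client_id),
    relative_id INTEGER NOT NULL REFERENCES client(client_id),
    rel_type    VARCHAR(40) NOT NULL,  -- e.g. 'parent', 'guardian', 'sibling'
    valid_from  DATE NOT NULL,
    valid_to    DATE,                  -- NULL = still current
    PRIMARY KEY (client_id, relative_id, rel_type, valid_from)
);
```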
Developing countries often face challenges in applying technology projects at the local level. This is particularly relevant to projects related to health services, such as hospital systems, since any implemented service is likely to directly affect citizens or patients. This paper reports on an investigation capturing insights from people working at hospitals in developing countries that have undergone a transition from a paper-based system to the implementation of a Database Management System (DBMS). Most rural hospitals in Nigeria still use paper as a means of creating patient records manually. One such hospital, Sapele General Hospital, is considering moving towards a DBMS and is used as a case study to capture the challenges, opportunities, issues and concerns of people working at a hospital considering implementing a DBMS. The paper is informed by a literature review covering relevant previous DBMS implementations in hospital systems and the challenges they faced. It is also informed by interviews and surveys of both people working at hospitals in developing countries that have implemented a DBMS and people from the case study considering such an implementation. The paper provides several contributions. First, it provides insights and guidance on issues, benefits, challenges and practical considerations in moving from a paper-based hospital records system to an electronic system, informed by previous implementations, which can be used to inform similar hospital implementations of DBMSs. Second, it provides insights on current concerns and challenges that hospitals face in moving from paper-based systems to electronic DBMSs. It further captures a balanced perspective on some of the likely benefits and challenges of implementing a DBMS within the context of developing countries. [ABSTRACT FROM AUTHOR]
This article discusses the challenges for database management in the Internet of Things. We provide scenarios to illustrate the new world that will be produced by the Internet of Things, where physical objects are fully integrated into the information highway. We discuss the different types of data that will be part of the Internet of Things: identification, positional, environmental, historical, and descriptive data. We consider the challenges brought by the need to manage vast quantities of data across heterogeneous systems, in particular in the areas of querying, indexing, process modeling, transaction handling, and integration of heterogeneous systems. We refer to earlier work that might provide solutions for these challenges. Finally, we discuss a road map for the Internet of Things and the respective technical priorities.
Database management, design and information systems development are becoming an integral part of many business applications. Contemporary Issues in Database Design and Information Systems Development gathers the latest developments in the area to make this the most up-to-date reference source for educators and practitioners alike. Information systems development activities enable many organizations to effectively compete and innovate, as new database and information systems applications are constantly being developed. Contemporary Issues in Database Design and Information Systems Development presents the latest research ideas and topics on databases and software development. The chapters in this innovative publication provide a representation of top-notch research in all areas of database and information systems development.
Description (Arabic, translated): Contemporary Issues in Database Design and Information Systems Development gathers the latest developments in the field, making it the most up-to-date reference source for educators and practitioners alike. Information systems development activities enable many organizations to compete and innovate effectively, as new database and information systems applications are constantly being developed. Contemporary Issues in Database Design and Information Systems Development presents the latest research ideas and topics on databases and software development. The chapters in this innovative publication offer a representation of the best research in all areas of database and information systems development.
Database technology is one of the fastest growing areas of computer science and also one of the most widely used technologies; it has become the core of computer information systems and application systems and an important basis for them. In this paper, we discuss all the important aspects of the database design process, covering its six stages: requirements analysis, conceptual design, logical design, physical design, database implementation, and database operation and maintenance. We then set out the various problems that occur in database design and analyze a variety of ways to solve them.
https://www.researchgate.net/publication/272050902_The_Database_Design_and_Optimization
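A hedged illustration of how one requirement flows through those stages (entity, table and index names are invented, not taken from the paper): a conceptual entity becomes a logical relation, which the physical design phase then tunes for the expected workload.

```sql
-- Conceptual design: entity EMPLOYEE(emp_no, name, dept) identified
-- during requirements analysis.

-- Logical design: the entity becomes a relation with keys and constraints.
CREATE TABLE employee (
    emp_no INTEGER PRIMARY KEY,
    name   VARCHAR(100) NOT NULL,
    dept   VARCHAR(50)  NOT NULL
);

-- Physical design: storage decisions driven by the workload, e.g. an index
-- because the dominant query filters by department.
CREATE INDEX employee_dept_idx ON employee(dept);

-- Implementation, then operation and maintenance: the running system is
-- monitored, and decisions such as the index above are revisited over time.
```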
This research was done to find a simple way to determine which normalization techniques are appropriate in database design. Normalization proceeds through several steps, namely the unnormalized form, first normal form, second normal form and third normal form; only these three normal forms are discussed in this study, since in lectures students often do not understand how to apply these normalization techniques. The results of this study include determining the database data structures, forming SQL (Structured Query Language) statements using the MySQL DBMS, and a prototype transaction form.
https://www.researchgate.net/publication/331437238_NORMALIZATION_IN_DATABASE_DESIGN
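As a worked sketch of those normal forms (an invented invoice example, not the paper's own data), starting from a flat table invoice_flat(invoice_no, cust_name, cust_city, product_code, product_name, qty) whose key is (invoice_no, product_code):

```sql
-- 2NF: remove partial dependencies on the composite key.
-- product_name depends only on product_code, so it moves to its own table.
CREATE TABLE product (
    product_code VARCHAR(20) PRIMARY KEY,
    product_name VARCHAR(100) NOT NULL
);

-- 3NF: remove transitive dependencies. cust_city depends on the customer,
-- not directly on the invoice, so customers also get their own table.
CREATE TABLE customer (
    cust_id   INTEGER PRIMARY KEY,
    cust_name VARCHAR(100) NOT NULL,
    cust_city VARCHAR(100)
);

CREATE TABLE invoice (
    invoice_no INTEGER PRIMARY KEY,
    cust_id    INTEGER NOT NULL REFERENCES customer(cust_id)
);

CREATE TABLE invoice_line (
    invoice_no   INTEGER NOT NULL REFERENCES invoice(invoice_no),
    product_code VARCHAR(20) NOT NULL REFERENCES product(product_code),
    qty          INTEGER NOT NULL,
    PRIMARY KEY (invoice_no, product_code)
);
```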
The brModelo tool is an initiative of the UFSC Database Group. Its first version was developed in 2005, and its main purpose is to help the teaching of relational database design. Compared to similar tools, its main differentiators are support for all steps of the classical database design methodology, user interaction during the logical design step, and support for all extended Entity-Relationship concepts. With more than fifteen years of existence, brModelo has been very well accepted by the Brazilian database community, which motivated the development and release of several versions of the tool. This article presents the history of brModelo, including its available versions and their functionalities. Additionally, we detail its functionalities and compare it with popular related tools.
https://www.researchgate.net/publication/354621893_brModelo_An_Initiative_for_Aiding_Database_Design
The methodology of database design in organization management systems
https://iopscience.iop.org/article/10.1088/1742-6596/803/1/012030/pdf
Design methods for the new database era: a systematic literature review.
Over the last decade, a range of new database solutions and technologies have emerged, in line with the new types of applications and requirements that they facilitate. Consequently, various new methods for designing these new databases have evolved, in order to keep pace with progress in the field. In this paper, we systematically review these methods, with a view to better understanding their suitability for designing new database solutions. The study shows that while research in the field has expanded continuously, a range of factors still require further attention. The study identified important criteria in database design and analyzed existing studies accordingly. This analysis will assist in defining and recommending key areas for future research, guiding the evolution of design methods, their usability and adaptability in real-world scenarios. The study found that current database design methods do not address non-functional requirements; tend to refer to a preselected database; and are lacking in their evaluation.
In this paper, we performed a systematic literature review in order to explore the current state of research regarding database design methods in the new database era; to identify future trends; and to propose key areas for future research. This systematic literature review was based on a set of criteria developed through the state-of-the-art literature, as well as from best practices in the field. Our findings show that while relatively few studies have been conducted in the field thus far, the subject is definitely garnering more research attention. Some gaps still exist; our analysis of existing studies has helped to identify these gaps and to define important areas for future research in the field. This study calls for increased research into design methods for the new database era. In particular, it suggests that methods should be more general, address non-functional requirements, set the ground for deciding the appropriate databases, and be evaluated more rigorously.
Knowledge-Based Approaches to Database Design.
Abstract:
Database design is often described as an intuitive, even artistic, process. Many researchers, however, are currently working on applying techniques from artificial intelligence to provide effective automated assistance for this task. This article presents a summary of the current state of the art for the benefit of future researchers and users of this technology. Thirteen examples of knowledge-based tools for database design are briefly described and then compared in terms of the source, content, and structure of their knowledge bases; the amount of support they provide to the human designer; the data models and phases of the design process they support; and the capabilities they expect of their users. The findings show that there has apparently been very little empirical verification of the effectiveness of these systems. In addition, most rely exclusively on knowledge provided by the developers themselves and have little ability to expand their knowledge based on experience. Although such systems ideally would be used by application specialists rather than database professionals, most of these systems expect the user to have some knowledge of database technology. [ABSTRACT FROM AUTHOR]
Contributions to Logical Database Design.
Abstract:
This paper treats the problems arising at the stage of logical database design. It comprises a synthesis of the most common inference models of functional dependencies, deals with the problems of building covers for sets of functional dependencies, synthesizes normal forms, presents trends regarding normalization algorithms and provides the temporal complexity of those. In addition, it presents a summary of the best-known key-search algorithms and deals with issues of analysis and testing of relational schemes. It also summarizes and compares the different features of recognition of acyclic database schemas.
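For reference, the inference models mentioned here build on Armstrong's axioms for functional dependencies, which can be stated as follows (standard notation, with X, Y, Z attribute sets over a relation schema; this is textbook material, not a formula quoted from the paper):

```latex
% Armstrong's axioms: a sound and complete inference system
% for functional dependencies.
\begin{align*}
&\text{Reflexivity:}  && Y \subseteq X \implies X \to Y\\
&\text{Augmentation:} && X \to Y \implies XZ \to YZ\\
&\text{Transitivity:} && X \to Y,\; Y \to Z \implies X \to Z
\end{align*}
% Example: from emp_no -> dept and dept -> manager,
% transitivity derives emp_no -> manager.
```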
A Decision Support Method for Evaluating Database Designs.
The article presents a study on a structured and systematic method for evaluating and selecting database designs. The study uses a multi-criteria decision support method, which helps database developers improve database quality and make more informed decisions at the time of database development. Furthermore, the study discusses the Analytic Hierarchy Process (AHP), which supports decision making by modelling a complex problem as a hierarchical structure.
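In standard AHP (the general technique, not necessarily the article's exact formulation), candidate designs are compared pairwise in a matrix, the priority weights fall out of the principal eigenvector, and a consistency index guards against contradictory judgments:

```latex
% Pairwise comparison matrix A, with a_{ij} the judged importance of
% alternative i relative to alternative j, and a_{ji} = 1/a_{ij}.
% The normalized principal eigenvector w gives the priority weights:
A\,w = \lambda_{\max}\, w
% Consistency of the judgments for an n x n matrix is checked via the
% consistency index (accepted in practice when the consistency ratio
% CI/RI stays below roughly 0.1):
CI = \frac{\lambda_{\max} - n}{n - 1}
```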
Optimization is seemingly everywhere and yet elusive. Our bodies, tools, and institutions are now understood as endlessly optimizable. But what does optimization mean? Or more crucially, what does it do? Who or what is optimized or dis-optimized? This themed issue introduces optimization as a critical concept to analyze the governance and governmentality of large technological infrastructures, platforms, and self-management apps. We define optimization as a form of calculative decision-making embedded in legitimating institutions and media that seek to actualize optimal social and technical practices in real time. Our Introduction outlines the techniques, legitimations, and social practices of optimization that have spread in many forms across the globe. By questioning optimization, our Introduction considers the social practices, geopolitical networks, and forms of organization (and violence) shored up by the desire for optimum performance. [ABSTRACT FROM AUTHOR]
Database design -- optimization and normalization
Database management:
Database development:
Database Development
https://www.researchgate.net/publication/354682086_Database_Development
Advanced Database Management Systems: Disks
Summarized from Lecture 3
Disks and files are the topic of this write-up. In studying database management systems (DBMSs), a database administrator (DBA) will realize that a DBMS has several tasks. Its main task, although this is debatable, is to store information on hard disks. This has many implications for DBMS design, because we have to consider the performance of the storage operation as well as the retrieval operation.

Database administrators also weigh the performance of a few other operations, but these are reserved for another discussion. A DBA's chief concern should be the disk being used to store data. Put bluntly, data must be read from disk before it can be used for any purpose. Reading data is the transfer of data from disk to main memory, with the help of a Buffer Manager; main memory is called random access memory, or RAM for short. Conversely, disks are also written: writing data is the transfer of data from RAM to disk. An easy way of remembering this: you read from disk into RAM, you write from RAM back to disk, and the Buffer Manager makes both happen.

Both reading and writing are high-cost operations relative to in-memory operations, the cost being paid in operating system resources. These resources must not be exhausted, or the system will slow down and possibly crash. Database administrators, as well as the database management systems themselves, work hard during schema design to keep these high-cost operations to a minimum and to keep the operating system from degrading.

Is that minimum also the optimum, and how would we know? In computer science, and possibly other fields, there is the concept of something running at an efficient rate, or being optimized to run at an efficient rate. The efficient rate is either a maximum rate (such as throughput) or a minimum rate (such as I/O cost), and either can be achieved through some type of optimization.
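The cost of each such disk read or write is conventionally broken down as follows (the standard textbook model, not a formula from this particular write-up):

```latex
% Time to access (read or write) one disk block:
T_{\text{access}} = T_{\text{seek}} + T_{\text{rotation}} + T_{\text{transfer}}
% T_seek:     move the disk arm to the right track
% T_rotation: wait for the block to spin under the head
% T_transfer: move the bits; the first two terms dominate, which is why
%             DBMSs work to minimize the number of random disk I/Os
```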
(PDF) Advanced Database Management Systems: Disks. Available from: https://www.researchgate.net/publication/349915152_Advanced_Database_Management_Systems_Disks [accessed Oct 17 2021].
DATABASE MANAGEMENT
Abstract
Data management is essential for health services research. Data sources such as inpatient records, insurance claims, and large national surveys form the basis for the majority of research studies addressing questions related to health care access, delivery and outcomes. The purpose of these secondary data sources is generally not specific to the myriad studies that use them; therefore, steps such as recoding and merging of data sources are necessary. In addition, for both secondary and primary sources of data, research ethics require measures to ensure data integrity and security. Because research results are only as reliable as the data supporting them, these data management activities serve as the engine upon which reliable research depends. Yet funding and staffing database management within new centers of research can be challenging. Research centers within non-research-intensive institutions face the added challenge of building subject expertise in addition to technical expertise. This article provides guidelines for establishing a data management unit within a new center of research. Funding and staffing mechanisms and the importance of mentoring and collaboration are discussed.
https://www.researchgate.net/publication/275582511_Database_Management
Impact of database management in modern world
Abstract
Database management systems harness the power of data and the distribution of data control, all of which can be achieved through well-designed program operation without undue risk. The use of database management tools has grown explosively of late, given the potential impact of these technologies. Database management systems build on improvements brought about by integration. These systems provide essential means of communicating simultaneously to study the work, produce better performance, and coordinate accountability. Proper programming can be used effectively to enhance benefits and to supply the much-needed understanding required to keep fast-moving decisions under control. The article provides a broad perspective on the application of databases in various sectors.
https://www.researchgate.net/publication/343323880_Impact_of_database_management_in_modern_world
A blockchain-based database management system
Software and hardware applications are clearly on the way to becoming an integral tool of business, communication and popular culture in many parts of the world. People interact with their environment via the Internet to perform physical activities remotely. These applications are hosted on public or private servers under the control of the server administrator. Users' online usage data can be stored on public or private cloud platforms, used for processing and monitoring users' online behaviour and emotional factors, and shared with third parties to facilitate their business decisions. When users allow their data to be collected via software applications and mobile devices, they need to have some level of trust in, and control over, their data. However, software applications or mobile devices connected to a cloud server using a client–server architecture do not ensure the reliability, security and integrity of that data. To overcome these limitations, we propose a database management system using blockchain technology that can be used by any software application. A blockchain database connected to the cloud server can be used to increase the trustworthiness of the application. Blockchain has the capability to provide decentralization, immutability and owner-controlled digital assets to software applications. Since users save their data in a shared transaction repository with tamper-resistant records, related parties can access and control users' data without the need for a central control system.
https://www.researchgate.net/publication/341456220_A_blockchain-based_database_management_system
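The tamper-resistance idea can be sketched even inside an ordinary relational database as a hash chain, where each record commits to a digest of its predecessor. This is a toy illustration only, assuming PostgreSQL with the pgcrypto extension; it is not the paper's actual blockchain design and provides no decentralization by itself:

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Append-only ledger: every row stores the previous row's hash,
-- so silently editing history breaks the chain.
CREATE TABLE ledger (
    block_id  BIGSERIAL PRIMARY KEY,
    payload   TEXT NOT NULL,
    prev_hash TEXT NOT NULL,
    curr_hash TEXT NOT NULL
);

-- Appending a record: fetch the latest hash and chain the new payload to it.
INSERT INTO ledger (payload, prev_hash, curr_hash)
SELECT v.payload,
       COALESCE(last.curr_hash, 'GENESIS'),
       encode(digest(COALESCE(last.curr_hash, 'GENESIS') || v.payload,
                     'sha256'), 'hex')
FROM (VALUES ('user consented to data sharing')) AS v(payload)
LEFT JOIN LATERAL (
    SELECT curr_hash FROM ledger ORDER BY block_id DESC LIMIT 1
) AS last ON TRUE;

-- Verification: any edited row makes the recomputed hash disagree
-- (assumes contiguous block_ids in this toy, i.e. no deletes).
SELECT b.block_id
FROM ledger b
JOIN ledger p ON p.block_id = b.block_id - 1
WHERE b.curr_hash <> encode(digest(p.curr_hash || b.payload, 'sha256'), 'hex');
```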
----
Concurrency Control in Database Management System
Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions. This article describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational DBMS is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases. [ABSTRACT FROM AUTHOR]
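A classic anomaly the article refers to is write skew, sketched below as two interleaved sessions (the doctors-on-call scenario is the standard illustration from this literature, not code from the article; syntax assumes PostgreSQL, where REPEATABLE READ is implemented as snapshot isolation). Each transaction alone preserves the rule "at least one doctor on call", yet the interleaving violates it:

```sql
-- Setup (once): CREATE TABLE doctors (name TEXT PRIMARY KEY, on_call BOOLEAN);
--               INSERT INTO doctors VALUES ('alice', TRUE), ('bob', TRUE);

-- Session 1                                -- Session 2
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM doctors
WHERE on_call;                -- sees 2
                                            BEGIN ISOLATION LEVEL REPEATABLE READ;
                                            SELECT count(*) FROM doctors
                                            WHERE on_call;   -- also sees 2
UPDATE doctors SET on_call = FALSE
WHERE name = 'alice';
COMMIT;
                                            UPDATE doctors SET on_call = FALSE
                                            WHERE name = 'bob';
                                            COMMIT;  -- snapshot isolation allows this

-- Outcome: no doctor is on call, although each transaction alone preserved
-- the invariant (write skew). The serializable algorithm described above
-- (adopted, e.g., for PostgreSQL's SERIALIZABLE level) instead aborts one
-- of the two transactions with a serialization failure.
```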