International Journal of Knowledge Content Development & Technology - Vol. 13, No. 1, pp.39-39
ISSN: 2234-0068 (Print) 2287-187X (Online)
Print publication date 09 Apr 2025
Online publication date 09 Apr 2025

Leveraging Ethnobiological Animal Grouping for Database Normalisation

Lubabalo Mbangata* ; Upasana Gitanjali Singh**
*Department of Information Systems, Durban University of Technology, Durban, SOUTH AFRICA (First Author & Corresponding Author) lubabalom1@dut.ac.za
**School of Management, IT, and Governance, University of KwaZulu-Natal, Durban, SOUTH AFRICA (Co-Author) Singhup@ukzn.ac.za

Abstract

Purpose

This research paper explores the intersection of ethnobiology and database normalisation by examining how the traditional categorisation of animals in indigenous knowledge systems aligns with database design principles. Ethnobiology often documents how communities classify animals based on cultural, ecological, and functional attributes. This paper demonstrates how such classifications can illustrate the stages of database normalisation, a process used to organise data efficiently in relational databases. These classifications are based on how South African people understand these groupings.

Methodology

The study begins with unnormalised data, where animal categories are recorded as they exist in their raw, unstructured form; the animals are selected in no particular order and on no particular merit, provided they are living animals that can be categorised. Progressing to the first normal form (1NF), the data is organised into a tabular structure with unique rows and atomic values. In the second normal form (2NF), redundancies are reduced by ensuring that all non-primary attributes depend on the entire primary key. Finally, in the third normal form (3NF), transitive dependencies are eliminated, creating a fully normalised, efficient data model.

Findings

The findings highlight how ethnobiological data naturally follows hierarchical and relational patterns, making it an effective analogy for understanding database normalisation. This approach not only enhances the understanding of database concepts but also underscores the value of indigenous knowledge in illustrating complex technical processes. This study also notes that the concept may not translate to other contexts, hence the call for further interdisciplinary exploration between ethnobiology and information science.

Keywords:

Database Normalisation, Ethnobiology, Third Normal Form (3NF), Ethnoscience, Classification

1. Introduction

In this day and age, data is central to how many businesses make decisions and develop competitive strategies. In light of that, a key consideration in delivering a strong database solution is its level of optimisation, which at the most fundamental level is secured through a well-defined architecture. Ahmedi et al. (2010) reflect on the optimisation of a database design given a set of relational schemas and the functional dependencies (FDs) that hold over them as input. They mention that a few systems for the optimisation of relational databases already exist to support schema enhancement, even though they are rarely utilised, whether by database experts or as an educational aid at universities.

1.1 Objective and Relevance of Ethnobiological Classification in Normalisation

On the other hand, anthropology is increasingly interested in understanding different ways of living and recognises that various worldviews have added new perspectives to its study of cultures. Wilson and Neco (2023) set out to explore what the ontological turn (OT) might offer for considering the fruitfulness of ethnobiology, a field that has developed mostly (but not entirely) within anthropological traditions informed by the biological and cognitive sciences. Their investigation is encouraged by the repeated suggestions of David Ludwig (Ludwig, 2018; Ludwig & Weiskopf, 2019; Ludwig & El-Hani, 2020) that ethnobiological research be brought into dialogue with the OT, an influential form of recent anthropological theorising, to ground the claims made about the living world of animals and plants.

Fasasi (2018) laments that research findings suggest learning science and mathematics has remained difficult for students and learners from traditional backgrounds because of tensions between contemporary Western science and cultural principles. In many different cultural settings, instructors remain confronted with the challenge of teaching learners and students Western science without adapting its concepts to their contexts.

In essence, ethnobiology is the study of the relations between human beings, other living creatures, and their cultures. Through newly proposed methods such as classification, ordination, and generalised linear models, considerable progress has been made in ethnobiological research (Gaoue et al., 2021). Attention to these progressive systematic methods makes ethnobiology better suited to relating and explaining other concepts, and also allows us to improve our understanding of how people select animals for use, thus informing the learning of other concepts as we relate them to culture. Wilson and Neco (2023: 201) argue that ethnobiology has involved a variety of research programmes, “some descriptive and extractive in their approach to Indigenous knowledge of the nonhuman biological world, others more normative and political in their views of correspondingly renamed “Traditional Ecological/Environmental Knowledge” and its standing vis-à-vis scientific knowledge”. Ludwig and El-Hani (2020) emphasise that while ethnobiologists have documented the biological knowledge of native societies, they also progressively stress the practical importance of this knowledge for addressing socio-ecological challenges in areas ranging from health and food security to labour settings and the safeguarding of biocultural heritage. In relating ethnobiology to database normalisation, it becomes possible to recognise how human groups affect animal populations in their surroundings, based on attributes, characteristics, group actions, and variations in the landscape (Ferreira Júnior et al., 2022).


2. Understanding Database Normalisation

2.1 Background of Normalisation

The basic process of relational database optimisation begins with database normalisation. Database normalisation is a technique utilised in relational database design to organise data efficiently, reduce redundancy, and minimise data anomalies (Akadal & Satman, 2022). The process involves dividing larger tables (entities) into smaller, more manageable ones and defining relationships between them. Dhabe et al. (2010) define normalisation as the process of decomposing a dataset into smaller entities with various attributes by analysing the dependencies amongst them, in a conflict-free and well-structured way. Database normalisation, initially formalised by Edgar F. Codd, is fundamental in reducing redundancy in relational databases (Codd, 1970).

Normalisation is a systematic approach to organising data in a relational database. It aims to reduce data redundancy, eliminate data anomalies, and ensure data integrity (Codd, 1970). By applying a series of rules, known as normal forms, database designers can create structures that are more efficient, flexible, and less prone to inconsistencies. The process of normalisation involves decomposing relations with anomalies to produce smaller, well-structured relations. This decomposition is guided by the concept of functional dependencies and the progressive application of normal forms (Kent, 1983). Each normal form builds upon the previous one, addressing specific types of data redundancies and anomalies.

2.2 Levels of Normalisation

Unnormalised Form (UNF) refers to a database design where data is stored in a single table without any normalisation rules applied. It may contain repeating groups, redundant data, and multiple values in a single field. UNF is essentially a raw data format that is not organised for efficiency or clarity (Connolly & Begg, 2015). This form can lead to data redundancy and anomalies during data operations, and it is the starting point for the normalisation process. According to Vertabelo (2020), a database in the First Normal Form (1NF) requires that the values in each column of a table are atomic, meaning each column must contain only one value per row; additionally, each row must be unique, and there should be no repeating groups or arrays. In other words, a table is in 1NF when: each column contains atomic (indivisible) values; each record (row) is unique and identifiable by a primary key; and there are no repeating groups or arrays within the table (Elmasri & Navathe, 2021). This form eliminates redundancy and prepares the data for subsequent normalisation levels.
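To make the step from UNF to 1NF concrete, the following is a minimal sketch in Python using the standard-library sqlite3 module. The table and column names (sightings_unf, sightings_1nf, observer) are illustrative assumptions rather than structures taken from the study itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unnormalised: one row stores several animals in a single field,
# violating atomicity ('Bear, Ostrich, Salmon' is not one value).
conn.execute("CREATE TABLE sightings_unf (observer TEXT, animals TEXT)")
conn.execute("INSERT INTO sightings_unf VALUES ('Sipho', 'Bear, Ostrich, Salmon')")

# 1NF: each column holds one atomic value and each row is uniquely
# identifiable by a primary key; the repeating group becomes rows.
conn.execute("""
    CREATE TABLE sightings_1nf (
        sighting_id INTEGER PRIMARY KEY,
        observer    TEXT NOT NULL,
        animal      TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO sightings_1nf (observer, animal) VALUES (?, ?)",
    [("Sipho", "Bear"), ("Sipho", "Ostrich"), ("Sipho", "Salmon")],
)
print(conn.execute("SELECT * FROM sightings_1nf").fetchall())
```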

The Second Normal Form (2NF) builds on 1NF by ensuring that all non-key attributes are fully functionally dependent on the primary key. This requires that no column depends on only part of the primary key, which helps eliminate redundancy (Vertabelo, 2020). A database is therefore in 2NF when: firstly, it meets the requirements of 1NF; and all non-primary-key attributes are fully functionally dependent on the entire primary key (for tables with a composite key) (Hoffer et al., 2019). This form removes partial dependencies, where attributes depend on only part of a composite primary key, thereby improving data integrity, as the sketch below illustrates.
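The following sketch, again in Python with sqlite3 and hypothetical table names, shows a partial dependency and its removal: with the composite key (animal, region), the attribute animal_class depends on animal alone, so 2NF moves it into its own table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Not in 2NF: the composite key is (animal, region), but animal_class
# depends on animal alone -- a partial dependency that repeats the
# class for every region in which the animal is recorded.
conn.execute("""
    CREATE TABLE animal_region (
        animal       TEXT,
        region       TEXT,
        animal_class TEXT,      -- depends only on animal
        population   INTEGER,   -- depends on the full composite key
        PRIMARY KEY (animal, region)
    )
""")

# 2NF: the partially dependent attribute moves to a table keyed by
# animal; population stays with the full composite key.
conn.execute("CREATE TABLE animal (animal TEXT PRIMARY KEY, animal_class TEXT)")
conn.execute("""
    CREATE TABLE animal_population (
        animal     TEXT REFERENCES animal(animal),
        region     TEXT,
        population INTEGER,
        PRIMARY KEY (animal, region)
    )
""")
```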

According to DataCamp (2024), the Third Normal Form (3NF) further refines 2NF by ensuring that all attributes are not only fully functionally dependent on the primary key but also non-transitively dependent. Non-transitive dependency requires that non-key attributes do not depend on other non-key attributes, which helps in reducing data anomalies. A database is therefore in 3NF when: firstly, it meets the requirements of 2NF; and there are no transitive dependencies, meaning non-primary-key attributes depend only on the primary key and not on other non-primary-key attributes (Elmasri & Navathe, 2021). This form eliminates unnecessary dependencies, further reducing redundancy and enhancing data consistency.
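A final sketch (Python, sqlite3, illustrative names) shows a transitive dependency and its removal: blood_type is determined by animal_class, not by the animal itself, so 3NF stores it once per class.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Not in 3NF: animal -> animal_class -> blood_type is a transitive
# dependency; blood_type is determined by the class, not the animal.
conn.execute("""
    CREATE TABLE animal_flat (
        animal       TEXT PRIMARY KEY,
        animal_class TEXT,
        blood_type   TEXT   -- 'warm-blooded' or 'cold-blooded'
    )
""")

# 3NF: blood_type moves to a class table, so it is recorded once per
# class instead of once per animal.
conn.execute("CREATE TABLE animal_class (class_name TEXT PRIMARY KEY, blood_type TEXT)")
conn.execute("""
    CREATE TABLE animal (
        animal       TEXT PRIMARY KEY,
        animal_class TEXT REFERENCES animal_class(class_name)
    )
""")
conn.execute("INSERT INTO animal_class VALUES ('Mammal', 'warm-blooded')")
conn.execute("INSERT INTO animal VALUES ('Bear', 'Mammal')")
```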

2.3 Advantages and Challenges of Normalisation

The application of the database normalisation process carries both benefits and limitations. According to Eessaar (2016), the database normalisation process aids database developers in minimising (not eradicating) data redundancy and thus avoiding certain update anomalies that appear because there are combinatorial effects (CEs) amongst the propositions recorded in a database. Thus, the process of normalisation leads to a resulting model whose construction grounds the elimination of duplicate atomic statements and yields semantically correct, consistent, and complete rules in a database.

Reducing redundancy and maintaining data integrity in database normalisation are crucial aspects of database management, especially in complex databases. The advantages of reduced data redundancy include:

  • ➢ Improved Data Consistency: Reducing redundancy ensures that data is stored in a single location, which minimises the risk of inconsistencies. When data is duplicated, any update must be made in multiple places; storing each fact once ensures that updates propagate uniformly, minimising discrepancies across the database (Elmasri & Navathe, 2016).
  • ➢ Enhanced Storage Efficiency: By eliminating redundant data, storage space is used more efficiently for the requirements of the database. This can lead to cost savings, especially in large databases where storage costs can be significant (Connolly & Begg, 2015), and also optimise system performance (Gupta & Sharma, 2021).
  • ➢ Simplified Data Management: With less redundant data, managing and maintaining the database becomes simpler. This can lead to reduced administrative overhead and easier implementation of data governance policies (Silberschatz, Korth, & Sudarshan, 2019). Simplified data management also means faster query processing and more efficient database management, particularly in large-scale systems (Agarwal et al., 2020).

While the advantages of Maintaining Data Integrity include the following:

  • ➢ Accuracy and Reliability: Data integrity ensures that the data is accurate and reliable; enforcing constraints such as primary keys, foreign keys, and unique constraints keeps data valid (Ali & Afzal, 2020). This is essential for making informed business decisions and maintaining trust in the database system (Date, 2019).
  • ➢ Compliance and Security: Maintaining data integrity is often a regulatory requirement. Integrity ensures adherence to data governance and regulatory requirements, critical for sensitive applications like finance and healthcare (Patel & Gupta, 2021). Ensuring data integrity also helps in complying with legal standards and protecting sensitive information from corruption or unauthorized access (Hoffer et al., 2016).
  • ➢ Improved Data Quality: High data integrity leads to high data quality, which is crucial for analytics, reporting, and operational processes. Proper normalisation and integrity constraints reduce update, insert, and delete anomalies, improving the usability of the database (Ahmed et al., 2019). Quality data also enhances the overall performance of the database system (Rob & Coronel, 2017).

As much as redundancy reduction and data integrity maintenance in database normalisation are crucial aspects of database management, especially in complex databases, there are challenges in implementing them. These challenges include the following:

  • ➢ Performance Overhead: Normalisation to reduce redundancy can result in numerous smaller tables linked through keys, which may lead to performance trade-offs. Highly normalised databases can require more complex queries, which can impact performance (Connolly & Begg, 2015), and can lead to increased join operations during queries, potentially slowing down performance in complex systems (Batra & Verma, 2022).
  • ➢ Complexity in Design: Designing a highly normalised database requires careful planning to ensure the right balance between normalisation and performance. It requires careful planning and a deep understanding of normalisation principles and integrity constraints (Elmasri & Navathe, 2016). The errors in the design phase can lead to inefficiencies or scalability issues (Kumar et al., 2021).
  • ➢ Scalability Issues: As the database grows in size and complexity, maintaining integrity constraints can become resource-intensive, particularly in distributed or cloud-based systems (Shah et al., 2020). Ensuring that integrity constraints are enforced across distributed systems can be particularly difficult (Silberschatz et al., 2019).
  • ➢ Training and Expertise: Ensuring reduced redundancy and maintaining data integrity requires skilled database administrators (DBAs), developers, and advanced tools. This can be a barrier for organisations with limited resources (Hoffer et al., 2016), and it can pose a challenge for organisations with limited technical expertise or resources (Rahman & Sarkar, 2023).
  • ➢ Real-Time Applications: In systems requiring real-time data processing, implementing strict integrity checks can introduce latency, potentially affecting system responsiveness (Chopra & Malik, 2021).

Recent studies have emphasised the need for innovative teaching methods in database education. Ishaq et al. (2023) conducted a systematic literature review that distils existing literature on database systems education, discussing the evolution of teaching tools, methods, and curricula. They highlight the importance of engaging learners through modern content and supportive tools to enhance learning outcomes.

Another study by Ma (2024) explores the reform of database courses through the integration of knowledge graphs. This approach addresses the fragmented delivery of knowledge and outdated teaching methods, aiming to improve student comprehension and application of database concepts. These advancements underscore the necessity of continuously updating educational strategies to align with technological progress.

2.4 Insufficient teaching or understanding of database normalisation

Currently, there are different ways of teaching database normalisation in the formal classroom; however, these methods also have their challenges. Traditional methods of teaching database normalisation often rely on lectures, textbooks, and standardised examples, which may not fully engage students or address the complexities of real-world data scenarios. These approaches can lead to challenges such as increased query complexity, performance issues due to multiple table joins, and potential data integrity problems. For instance, Mbangata and Singh (2024) argue that traditional teaching methods focus heavily on the procedural aspects of normalisation, such as the steps to achieve different normal forms, rather than the underlying concepts and rationale. This can lead to rote learning without a deep understanding of why normalisation is necessary. Additionally, the abstract nature of normalisation concepts can make it difficult for students to grasp their practical applications, leading to a superficial understanding of the subject matter. Meanwhile, Ethnobiology Letters (2023) claims that students are often overwhelmed by the volume of information offered in database courses, which can result in only a shallow understanding of normalisation principles.

2.5 How Ethnobiological Perspectives Can Close the Gap

Many of these instructional methods fail to offer real-world contexts that make the abstract concepts of normalisation more relatable and understandable for students (Mbangata & Singh, 2024). As a result, traditional classroom approaches may fail to engage students effectively, leading to a lack of motivation and interest in the subject matter. The prior knowledge that traditional approaches demand, such as programming and algorithm design, can also be a barrier for many students (Ethnobiology Letters, 2023).

On the other hand, integrating ethnobiological viewpoints into the teaching of database normalisation may address the gaps identified above by employing culturally relevant and context-specific examples. For example, Slonka and Bhatnagar (2024) posit that the study of relationships between people and their biological environments can offer rich, real-world data that can be utilised to illustrate normalisation principles. The authors further suggest that incorporating an ethnobiological viewpoint may encourage interdisciplinary learning, helping students recognise the connections between database normalisation and other fields and, in so doing, deepen their understanding. By integrating data from ethnobiological studies, educators can create more engaging and relevant learning experiences, assisting students in understanding the importance of normalisation in organising complex, real-world data.

Utilising ethnobiological case studies and examples can make learning more engaging and relatable, increasing student motivation and awareness of database normalisation concepts. Ethnobiological concepts emphasise holistic understanding, which can assist students in grasping the broader implications and applications of normalisation beyond the technical procedures.


3. Ethnobiology and Normalisation Process

3.1 Introduction to Ethnobiology and Its Role in Data Classification

Ethnobiology is an interdisciplinary scientific field that examines the dynamic relationships between human cultures, their societies, and their natural environments, and how people observe, recognise, classify, utilise, and manage living things (Anderson et al., 2021). This discipline emphasises examining the complex relationships of traditional ecological knowledge (TEK), which involves the knowledge, practices, and theories established through indigenous cultural practices by local people over generations. According to Case (2021), TEK frequently offers classification systems that differ from scientific taxonomy, reflecting distinctive cultural, symbolic, and practical relationships with biodiversity.

Recent research has emphasised the significance of ethnobiology in sustainability science. For example, Arrivabene et al. (2024) discuss how ethnobiological knowledge contributes to sustainable resource management and conservation efforts. Furthermore, machine learning techniques in ethnobotany have unlocked innovative paths for recording and analysing traditional knowledge. Böhlen and Sujarwo (2020) explore how artificial intelligence can improve the documentation of ethnobotanical evidence, safeguarding the preservation and accessibility of TEK. The role of ethnobiology in understanding human-nature relations is also evident in recent publications. For instance, the Journal of Ethnobiology continues to publish interdisciplinary studies on the associations between humans and their biological worlds, contributing to the broader understanding of ethnobiological classification systems. Concerning classification specifically, ethnobiology places distinct importance on folk taxonomies, the ways that diverse cultural groups classify and organise living things in their environment (Berlin, 2014). This differs from modern scientific taxonomy in several key ways. Traditional classification characteristics include the following:

  • ➢ Based on practical uses, cultural significance, and local ecological relationships;
  • ➢ Passed down through generations via oral traditions and direct experience;
  • ➢ Often incorporates spiritual and cultural beliefs about organisms;
  • ➢ Usually focuses on locally relevant species rather than global biodiversity;
  • ➢ May group organisms based on functional similarities rather than evolutionary relationships.

According to Hunn and Brown (2011), various native communities categorise plants and animals mostly by their uses (medicinal, food, building materials, etc.) or by their ecological roles. This differs from the scientific classification system, which groups organisms based on evolutionary relationships and shared physical characteristics. The study of these traditional classification systems is valuable because it preserves significant cultural knowledge about local biodiversity and can reveal previously unknown properties or uses of organisms (Wolverton et al., 2020). Traditional classification systems can vary significantly between cultures. For instance, some African tribes classify certain plants or animals collectively based on their spiritual properties, whereas Western or European communities might group marine organisms based on their behaviour patterns and harvesting seasons.

As noted by Nabhan (2022), ethnobiologists working in this area must be sensitive to cultural contexts and intellectual property rights, as traditional classification systems often represent centuries of accumulated knowledge that belongs to specific communities. This knowledge can be particularly valuable for biodiversity conservation and sustainable resource management. Ethnobiology thus provides critical insights into how cultural and traditional knowledge systems classify and interact with the natural world. By bridging traditional wisdom with contemporary scientific approaches, it offers valuable perspectives for biodiversity conservation, sustainable development, and the preservation of cultural heritage.

3.2 Ethnobiology and the Normalisation Process

The intersection of ethnobiology and database design reveals substantial parallels in knowledge organisation and classification methodologies. According to Garnett et al. (2018), recent studies have established that ethnoclassification systems frequently display sophisticated hierarchical structures that reflect modern database architecture principles. These traditional knowledge systems have evolved over time to produce efficient frameworks for understanding ecological relationships and managing environmental resources. Ludwig (2019) maintains that ethnobiological classification systems exhibit noteworthy resemblances to the third normal form (3NF) in database design, where information is organised to minimise redundancy and dependency issues. For instance, traditional ecological knowledge systems frequently separate species characteristics into distinct but interconnected categories, comparable to how relational databases allocate attributes across linked tables.

Modern research has revealed that native communities’ biodiversity classification systems can be more granular and context-rich than Western scientific taxonomies. Albuquerque et al. (2021) discovered that ethno-communities often preserve composite classification systems that integrate several layers of relationships, including ecological, utilitarian, and cultural associations. This multi-dimensional approach to organisation offers valuable insight for developing flexible and culturally responsive database schemas. In this context, ethnobiological classification serves as an approach to better understand database design by representing the principles of grouping based on relationships and characteristics. Using an ethnobiological approach to comprehend the database normalisation process can offer a more intuitive grasp of the hierarchical organisation and dependency reduction essential in structured data systems.

The incorporation of ethnobiological knowledge systems into contemporary data management has practical applications in biodiversity preservation and sustainable resource management. A study by Pardo-de-Santayana and Macía (2023) demonstrates how indigenous classification systems can enrich biodiversity databases by integrating traditional ecological knowledge with scientific data.


4. Application of Ethnobiological Concepts in Database Structuring: Hierarchy of Animal Classification and its Normalisation Stages

This section presents an ethnobiological classification of animals that can serve as a model for conceptualising normalisation in database design: an approach to explaining normalisation through animal classifications. This method leverages familiar concepts from biology to illustrate the increasingly structured organisation of data through the successive normal forms.

4.1 Application of Ethnobiological Concepts in Database Structuring: Unnormalised Form in Ethnobiology

In database terms, an Unnormalised form (UNF) lacks structure, similar to how an ungrouped list of animals represents no specific hierarchy or relationship. This initial stage, where animals are listed without classification, resembles early ethnobiological observations before meaningful categories are applied. In traditional knowledge systems, organisms may initially be noted in a general sense without a refined taxonomy, mirroring the UNF in databases.

The following is a set of animals selected in no particular order and on no particular merit; they represent the unnormalised form:

  • Bear; Ostrich; Salmon; Turtle; Frog; Ant; Scorpion; Earthworm; Fluke-worm; Tiger; Peacock; Goldfish; Crocodile; Toad; Cockroach; Spider; Leech; Tapeworm; Whale; Eagle; Guppy; Snake; Newt; Ladybug; Millipede.

In an Unnormalised database, all data points are stored together without organising principles. Similarly, an unclassified list of animals includes diverse species with no distinctions, such as the list comprising Bear, Ostrich, Salmon, Turtle, Frog, etc.
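As a sketch, the unnormalised stage can be pictured as a single free-text field in Python; the record layout below is illustrative only.

```python
# Unnormalised form: the paper's animal list held as one free-text
# field, with no keys, atomic values, or grouping.
unf_record = {
    "animals": ("Bear; Ostrich; Salmon; Turtle; Frog; Ant; Scorpion; "
                "Earthworm; Fluke-worm; Tiger; Peacock; Goldfish; "
                "Crocodile; Toad; Cockroach; Spider; Leech; Tapeworm; "
                "Whale; Eagle; Guppy; Snake; Newt; Ladybug; Millipede")
}
print(unf_record["animals"].split("; "))  # still one unstructured value
```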

4.2 From First Normal Form (1NF) to Third Normal Form (3NF) in Ethnobiology

In ethnobiological terms, moving from 1NF to 3NF involves increasing levels of categorisation that parallel the progressive structuring of database tables:

  • 1NF (First Normal Form): At this stage, data is grouped by specific attributes. In ethnobiology, this resembles the first step in categorising animals by broad distinctions, such as vertebrates vs. invertebrates. Just as 1NF requires removing duplicate fields and organising related data, this categorisation makes a clear structural division.

The following sets group the animals above as either vertebrates or invertebrates; they represent the first normal form (1NF).

  • ∙ Vertebrates: Bear; Ostrich; Salmon; Turtle; Frog; Tiger; Peacock; Goldfish; Crocodile; Toad; Whale; Eagle; Guppy; Snake; Newt.
  • ∙ Invertebrates: Ant; Scorpion; Earthworm; Fluke-worm; Cockroach; Spider; Leech; Tapeworm; Ladybug; Millipede.

In traditional classifications, organisms might be divided into overarching groups, such as ‘things that fly’ versus ‘things that crawl,’ aligning with 1NF where tables are divided based on fundamental differences (Berlin, 1992).
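As a sketch of this step, each animal can be recorded as one row with atomic values and a unique identifier, carrying the broad group as an attribute; the tuple layout is an illustrative assumption.

```python
# 1NF sketch: one animal per row, atomic values, unique ids.
animals_1nf = [
    # (animal_id, animal, broad_group)
    (1, "Bear", "Vertebrate"),
    (2, "Ostrich", "Vertebrate"),
    (3, "Salmon", "Vertebrate"),
    (6, "Ant", "Invertebrate"),
    (7, "Scorpion", "Invertebrate"),
    # ... the remaining animals follow the same pattern
]
vertebrates = [row[1] for row in animals_1nf if row[2] == "Vertebrate"]
print(vertebrates)  # ['Bear', 'Ostrich', 'Salmon']
```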

  • 2NF (Second Normal Form): This stage introduces finer details, grouping animals based on additional criteria like being warm-blooded vs. cold-blooded or jointed vs. non-jointed legs. This step removes partial dependencies in databases, ensuring each subgroup holds relevant data that adds value to the classification without redundancy.

The following sets subdivide the animals above: vertebrates are grouped as warm-blooded or cold-blooded, and invertebrates as having jointed legs or no legs. They represent the second normal form (2NF).

  • Vertebrate Animals:
    • Warm-Blooded: Bear; Ostrich; Tiger; Peacock; Whale; Eagle.
    • Cold-Blooded: Salmon; Turtle; Frog; Goldfish; Crocodile; Toad; Guppy; Snake; Newt.
  • Invertebrate Animals:
    • With Jointed Legs: Ant; Scorpion; Cockroach; Spider; Ladybug; Millipede.
    • Without Legs: Earthworm; Fluke-worm; Leech; Tapeworm.

Ethnobiological categorisation of animals based on function and ecology, such as predatory vs. herbivorous creatures, reflects 2NF’s aim to establish unique attributes within each subgroup (Atran, 1998).
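A sketch of this stage separates the subgroup attribute from the individual animals, so that each subgroup's parent group is recorded once rather than repeated per animal; the dictionary names are illustrative.

```python
# 2NF sketch: each animal maps to a subgroup, and the subgroup's
# broader group is stored once, not once per animal.
subgroup_of_animal = {
    "Bear": "Warm-Blooded", "Ostrich": "Warm-Blooded", "Whale": "Warm-Blooded",
    "Salmon": "Cold-Blooded", "Turtle": "Cold-Blooded", "Frog": "Cold-Blooded",
    "Ant": "With Jointed Legs", "Scorpion": "With Jointed Legs",
    "Earthworm": "Without Legs", "Tapeworm": "Without Legs",
}
group_of_subgroup = {
    "Warm-Blooded": "Vertebrate", "Cold-Blooded": "Vertebrate",
    "With Jointed Legs": "Invertebrate", "Without Legs": "Invertebrate",
}
print(group_of_subgroup[subgroup_of_animal["Bear"]])  # Vertebrate
```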

  • 3NF (Third Normal Form): Finally, 3NF eliminates transitive dependencies, grouping animals by specific attributes that further refine their categorisation, such as mammals, birds, fish, etc. Ethnobiology often follows a similar model, breaking down broad categories into increasingly specialised subgroups, like mammals into bears and tigers, and birds into eagles and ostriches, to provide a fully detailed hierarchy that prevents redundancy.

The following sets, drawn from those above, are further grouped as mammals, birds, fish, reptiles, amphibians, animals with three pairs of legs, animals with more than three pairs of legs, and worm-like versus non-worm-like forms; they represent the third normal form (3NF).

  • Warm-Blooded animals:
    • Mammals: Bear; Tiger; Whale.
    • Birds: Ostrich; Peacock; Eagle.
  • Cold-Blooded animals:
    • Fish: Salmon; Goldfish; Guppy.
    • Reptiles: Turtle; Crocodile; Snake.
    • Amphibians: Frog; Toad; Newt.
  • With Jointed Legs Animals:
    • With 3 Pairs of Legs: Ant; Cockroach; Ladybug.
    • With More Than 3 Pairs of Legs: Scorpion; Spider; Millipede.
  • Without Legs Animals:
    • Worm-Like: Earthworm; Leech.
    • Not-Worm Like: Fluke-worm; Tapeworm.

In 3NF, each organism type can be classified distinctly with no overlapping attributes, similar to ethnobiology’s grouping by species-specific traits (Posey, 1999).
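The fully refined hierarchy can be sketched as a chain of single-fact lookups, with each attribute stored at the level that determines it, removing the transitive dependency animal -> class -> blood regulation -> broad group; the function name lineage is an illustrative assumption.

```python
# 3NF sketch: each fact is stored once, at the level that determines it.
class_of_animal = {
    "Bear": "Mammal", "Tiger": "Mammal", "Whale": "Mammal",
    "Ostrich": "Bird", "Peacock": "Bird", "Eagle": "Bird",
    "Salmon": "Fish", "Turtle": "Reptile", "Frog": "Amphibian",
}
subgroup_of_class = {
    "Mammal": "Warm-Blooded", "Bird": "Warm-Blooded",
    "Fish": "Cold-Blooded", "Reptile": "Cold-Blooded",
    "Amphibian": "Cold-Blooded",
}
group_of_subgroup = {"Warm-Blooded": "Vertebrate", "Cold-Blooded": "Vertebrate"}

def lineage(animal: str) -> tuple:
    """Walk the hierarchy for one animal, one lookup per level."""
    cls = class_of_animal[animal]
    sub = subgroup_of_class[cls]
    return (animal, cls, sub, group_of_subgroup[sub])

print(lineage("Bear"))  # ('Bear', 'Mammal', 'Warm-Blooded', 'Vertebrate')
```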

Ethnobiology’s layered classifications reflect the dependencies in database normalisation: the relationships between data are analogous to the relationships between species. For instance, when an animal is grouped as a vertebrate and further as a mammal, these categories have dependencies, just as database fields do. Ethnobiological groupings allow us to see dependencies visually and conceptually, which aids in understanding the goals of normalisation in databases.


5. Related Studies in Ethnoscience and Database Normalisation

Traditional knowledge systems (TKS) in ethnoscience involve the growing body of knowledge, practices, and beliefs established by native and local communities over generations. According to Hanazaki (2024), these systems are adaptive and dynamic, reflecting the complex relationships between living beings and their environment. They frequently offer insights into ethnoscience, a field that combines cultural practices with empirical observation to explain natural phenomena. The incorporation of TKS into modern data classification frameworks is progressively recognised as a vital area of research, presenting pathways to advance scientific research, encourage sustainability, and honour cultural heritage.

Ethnoscience is grounded in the consideration of how various cultures categorise and relate to their environments. Berlin (2014) noted that the main aim of ethnoscience is to document and analyse native taxonomies, which frequently reflect a profound understanding of biodiversity. According to Leonti (2024), ethnoscience systems are not static; they evolve by integrating new information and discoveries. Ethnoscience systems are inherently classificatory, relying on careful observation and categorisation methods that align with cultural worldviews and linguistic structures. For instance, Bicker et al. (2003) studied ethnobotanical classification systems among Southeast Asian communities, showing how native plant taxonomies reflect ecological relationships that are often overlooked by Western scientific frameworks. In the same way, Agrawal (1995) highlighted the role of indigenous knowledge in categorising natural resources, emphasising that such systems are often more adaptive to local ecological contexts. Notable research by Posey (2003) explored the Kayapó people’s environmental knowledge in Brazil, highlighting complex classification structures for flora and fauna that could inform biodiversity conservation. Moreover, the integration of TKS into geospatial data classification was examined by Aswani and Lauer (2006), who demonstrated how indigenous marine spatial planning practices could improve marine resource management strategies.

Traditional database education often relies on lectures, textbooks, and standardized examples, which may not fully engage students or address the complexities of real-world data scenarios. These methods can lead to challenges such as increased query complexity, performance issues due to multiple table joins, and potential data integrity problems. In contrast, the ethnobiological approach leverages culturally relevant and context-specific data to illustrate database concepts. By using familiar concepts from ethnobiology, such as animal classification systems, this approach provides a more intuitive and engaging way to understand database normalisation. This method encourages interdisciplinary learning, helping students see the connections between database normalisation and other fields, thereby deepening their understanding.

Studies have shown that incorporating ethnobiological perspectives into teaching can enhance student retention and engagement. For example, Mbaegbu and Osuafor (2023) found that students taught using ethnobiological instructional approaches had higher retention rates and found the material more relevant and interesting compared to traditional lecture methods. This approach not only makes learning more relatable but also fosters a deeper appreciation for the cultural and ecological contexts of data.


6. Discussion

The application of ethnobiological concepts, as demonstrated in this example, bridges traditional knowledge systems (TKS) with database structuring. Using culturally relevant classifications such as ethnobiological taxonomies provides a practical, intuitive, and sustainable method for structuring databases, particularly in contexts involving complex datasets or diverse cultural inputs. Moreover, the use of culturally relevant classifications, such as those derived from ethnobiology, to inform database structuring is significant for several reasons:

Cultural Relevance in Data Classification and Contextual Sensitivity: Ethnobiological classification systems, rooted in traditional knowledge, provide insights into how communities interact with and perceive their environment (Wilson & Neco, 2023). Culturally relevant classifications like ethnobiological taxonomies reflect the way local communities understand and interact with their environment. These classifications capture intricate relationships and dependencies among elements, much like database normalisation processes aim to represent relationships between data tables. By leveraging such systems, database designs can achieve both functional accuracy and cultural inclusivity, making them valuable tools for interdisciplinary projects. Berlin (2014) highlights that native taxonomies encapsulate a deep understanding of biodiversity, offering insights often overlooked in Western scientific frameworks. Incorporating these systems into databases ensures cultural sensitivity and better alignment with local practices, fostering user engagement and adoption.

Improved Data Representation: Traditional classification systems often capture multidimensional relationships among entities, such as ecological, utilitarian, and cultural attributes (Albuquerque et al., 2021). Applying these frameworks to database structures allows for richer, more context-aware data representation. Ethnobiological knowledge, often passed down orally, risks being lost. Structuring databases using these frameworks aids the systematic preservation of cultural and ecological knowledge, which can be vital for biodiversity conservation. The parallels between ethnobiological classifications and normalisation stages, as shown in this study, provide intuitive ways to conceptualise data organisation. This interdisciplinary approach aids database designers in visualising dependencies and ensuring minimal redundancy (Garnett et al., 2018).

However, it is worth noting that this analogy simplifies the normalisation process. In a real database scenario, we would be dealing with tables, attributes, and functional dependencies, which are not explicitly represented here. As Nabhan (2022) cautions, despite the benefits of using ethnobiology, integrating cultural classifications requires careful attention to intellectual property rights, contextual accuracy, and potential conflicts with scientific taxonomies.

6.1 Limitations of Conceptualising Normalisation through Ethnobiology

The primary limitation of this study lies in its conceptual and descriptive nature, as it does not include empirical validation. While the analogy between ethnobiological classification and database normalization forms is well-articulated, the study lacks concrete evidence demonstrating its effectiveness in teaching database concepts. This absence of empirical data means that the proposed approach remains theoretical, without practical validation through case studies or experimental research. To address this gap, future research should focus on testing the concepts presented in this study. Conducting experiments or case studies involving students or practitioners would provide valuable insights into how effectively this analogy aids in understanding database normalization. Such empirical validation would not only enhance the credibility of the proposed methodology but also offer practical recommendations for educators. By demonstrating real-world applications and outcomes, future research can bridge the gap between theory and practice. Therefore, it is recommended that subsequent studies prioritize empirical testing to substantiate the conceptual framework presented here.

The analogy between ethnobiological classification and database normalization, while insightful, does not fully capture the complexity of database relationships. In relational databases, relationships between tables are often intricate, involving various types of joins, foreign keys, and constraints that ensure data integrity and consistency. These relationships can represent many-to-many, one-to-many, or one-to-one associations, each with its own set of rules and implications for data retrieval and manipulation. Ethnobiological classifications, on the other hand, tend to be more straightforward, focusing on hierarchical and categorical groupings without the same level of interdependency and interaction. As a result, while the analogy helps in understanding basic normalization concepts, it falls short of illustrating the full spectrum of relational database complexities, such as referential integrity, cascading updates, and complex query optimization.

Another limitation of the ethnobiological analogy is its lack of representation of keys, attributes, and functional dependencies, which are fundamental to database normalization. In a relational database, primary keys uniquely identify records, and foreign keys establish relationships between tables. Attributes represent the data fields within a table, and functional dependencies define how attributes relate to one another, guiding the normalization process to eliminate redundancy and ensure data integrity. The ethnobiological classification system, while useful for illustrating hierarchical groupings, does not inherently include these critical database elements. Without a clear representation of keys and dependencies, the analogy may oversimplify the normalization process, potentially leading to misunderstandings about how data integrity and relationships are maintained in a normalized database.

The use of ethnobiological classifications to explain database normalization also introduces the potential for debate from a biological standpoint. Ethnobiological classifications are often based on cultural, ecological, and functional attributes that may not align with scientific taxonomy. Different cultures may classify the same organisms differently based on their unique perspectives and uses, leading to variations that can be subjective and context-dependent. This variability can make it challenging to establish a universally accepted classification system, potentially undermining the analogy’s effectiveness in a diverse educational setting. Additionally, some classifications may be oversimplified or not scientifically accurate, which could lead to confusion or misinterpretation when applied to the structured and precise nature of database normalization. Therefore, while the analogy provides a novel approach to teaching, it must be used with caution and supplemented with more rigorous, scientifically grounded examples.

6.2 Benefits of Conceptualising Normalisation through Ethnobiology

Using familiar concepts to explain technical topics is a powerful educational strategy that enhances comprehension and retention. Analogies and metaphors are particularly effective because they bridge the gap between the known and the unknown, making abstract or complex ideas more concrete and relatable. For instance, comparing a database to a library’s filing system helps learners understand how data is organised, stored, and retrieved. This method leverages existing knowledge, allowing learners to build on what they already know, which is crucial for grasping new concepts (AlgoCademy, 2023). By relating technical topics to everyday experiences, educators can demystify complex subjects, making them more accessible and less intimidating for students. Moreover, using familiar concepts can also foster engagement and interest. When learners see the relevance of a technical topic to their own lives or experiences, they are more likely to be motivated and invested in the learning process. This approach not only aids in understanding but also in retaining information, as it creates meaningful connections in the learner’s mind. For example, explaining data flow in a network using the metaphor of water flowing through pipes can help students visualise and understand the concept more clearly (AlgoCademy, 2023). This strategy is particularly effective in diverse educational settings where students may have varying levels of prior knowledge and experience.

The principle of increasing organization and specificity is fundamental in both educational methodologies and data management. In the context of database normalization, this principle is exemplified by the progressive structuring of data through various normal forms. Starting from an unnormalized form, where data is raw and unstructured, the process moves through stages that increasingly organize and refine the data. Each stage, from the first normal form (1NF) to the third normal form (3NF), introduces more specific rules and structures that reduce redundancy and improve data integrity (Renze, 2023). This hierarchical approach mirrors effective organizational strategies in other fields, where tasks and information are systematically categorized and refined to enhance clarity and efficiency.

In educational settings, illustrating this principle helps students understand the importance of structure and specificity in managing complex information. By breaking down a complex topic into smaller, more manageable parts, educators can guide students through a logical progression of ideas. This method not only aids comprehension but also helps students develop critical thinking and problem-solving skills. For example, teaching database normalization by progressively introducing more specific rules and structures helps students grasp the underlying principles and their practical applications (CHRMP, 2025). This approach aligns with broader educational goals of fostering deep understanding and the ability to apply knowledge in various contexts.

Logical grouping and subdivision of data are essential techniques in data management that enhance clarity, efficiency, and usability. In database design, these techniques are employed to organize data into tables and relationships that minimize redundancy and ensure data integrity. For instance, in the first normal form (1NF), data is organized into tables with atomic values, eliminating repeating groups and ensuring that each column contains only one type of data. This initial grouping sets the foundation for further refinement and subdivision in subsequent normal forms (Renze, 2023). By logically grouping related data, databases can efficiently store and retrieve information, supporting accurate and reliable data analysis.

In educational contexts, demonstrating how data can be logically grouped and subdivided helps students understand the principles of data organization and management. This understanding is crucial for developing skills in database design, data analysis, and information systems management. By using real-world examples and case studies, educators can show how logical grouping and subdivision are applied in practice, making abstract concepts more tangible and relatable. For example, grouping data by categories such as customer information, product details, and transaction records in a retail database illustrates how logical organization supports efficient data retrieval and analysis (BYJU’S, 2025). This approach not only enhances students’ technical skills but also prepares them for practical challenges in the field of data management.
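As a sketch of the retail example just mentioned, the grouping can be expressed as three small tables linked by keys using Python's sqlite3; all table and column names here are illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Customer information, product details, and transaction records
    -- grouped logically, each fact stored in exactly one place.
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE product (
        product_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        unit_price REAL NOT NULL
    );
    CREATE TABLE sale (
        sale_id     INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(customer_id),
        product_id  INTEGER REFERENCES product(product_id),
        quantity    INTEGER NOT NULL
    );
""")
```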


7. Conclusion

This paper has shown that database normalisation is a systematic process aimed at reducing data redundancy, ensuring data integrity, and organising relational databases into more efficient and meaningful structures. It has leveraged the levels of normalisation (unnormalised form, 1NF, 2NF, and 3NF) to illustrate progressively refined data structuring. This study also revealed that ethnobiology studies the relationships between humans, cultures, and the natural environment, focusing on classification systems that reflect cultural and ecological interdependencies. These classification systems are comparable to database normalisation, as they organise entities based on distinct but interconnected attributes, eliminating redundancies and clarifying functional dependencies.

The integration of ethnobiological classification systems with database normalization principles presents a novel approach that differentiates this study from existing research. Unlike traditional methods that rely solely on technical frameworks, this interdisciplinary approach leverages the rich, culturally embedded knowledge systems of ethnobiology to illustrate complex database concepts. This unique combination not only enhances the understanding of database normalization but also underscores the value of indigenous knowledge in modern data management.

Drawing on familiar cultural and ecological frameworks such as ethnobiological classifications helps simplify the understanding of database normalisation concepts. This approach aligns technical database principles with intuitive human categorisation, enriching data structuring and improving the contextual relevance of databases. Ethnobiology’s hierarchical classification, such as the categorisation of animals into vertebrates and invertebrates and further into subcategories, parallels the normalisation stages, highlighting dependencies and reducing overlaps.

Integrating ethnobiological principles into normalisation exemplifies the value of interdisciplinary approaches in problem-solving. This analogy simplifies technical concepts and fosters intuitive learning, making the process accessible to diverse audiences. It also bridges the gap between cultural knowledge and technical database management, promoting sustainable and inclusive practices in data handling.

The practical applications of this approach are manifold. By using familiar ethnobiological classifications, educators can make database normalization concepts more relatable and engaging for students, potentially improving retention and comprehension. Additionally, this method promotes cultural sensitivity in data handling, ensuring that database designs are inclusive and contextually relevant.

Furthermore, this research bridges the gap between ethnobiology and information science, opening new avenues for interdisciplinary exploration. The insights gained from this study can inform future research and applications, particularly in areas where cultural and ecological knowledge is paramount. In conclusion, the innovative integration of ethnobiological principles with database normalisation not only advances the field of database management but also highlights the importance of preserving and utilising traditional knowledge systems in contemporary scientific practices.

References

  • Agarwal, S., Gupta, R., & Singh, P. (2020). Optimizing database redundancy for large-scale systems. International Journal of Database Management Systems, 12(4), 23-35.
  • Agrawal, A. (1995). Dismantling the divide between indigenous and scientific knowledge. Development and Change, 26(3), 413-439. [https://doi.org/10.1111/j.1467-7660.1995.tb00560.x]
  • Ahmed, H., Kumar, S., & Verma, R. (2019). Impact of normalization on database anomalies. Journal of Information Systems, 14(2), 45-59.
  • Ahmedi, L., & Jajaga, E. (2010, February). Normalization of relations and ontologies. In Proceedings of the 9th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases (AIKED’10) (pp. 419-424). Cambridge, UK.
  • Akadal, E., & Satman, M. H. (2022). A Novel Automatic Relational Database Normalization Method. Acta Informatica Pragensia, 11(3), 293-308. [https://doi.org/10.18267/j.aip.193]
  • Albuquerque, U. P., Ludwig, D., Feitosa, I. S., De Moura, J. M. B., Gonçalves, P. H. S., Da Silva, R. H., Da Silva, T. C., Gonçalves-Souza, T., & Ferreira Junior, W. S. (2021). Integrating traditional ecological knowledge into academic research at local and global scales. Regional Environmental Change, 21, 1-11. [https://doi.org/10.1007/s10113-021-01774-2]
  • AlgoCademy. (2023). How to Explain Complex Technical Concepts Simply: A Comprehensive Guide. Available at: https://algocademy.com/blog/how-to-explain-complex-technical-concepts-simply-a-comprehensive-guide/, [Accessed 10 Mar. 2025].
  • Ali, M., & Afzal, M. (2020). Data integrity in relational database systems. Computing and Informatics Journal, 39(3), 123-138.
  • Anderson, E. N., Pearsall, D. M., Hunn, E. S., & Turner, N. J. (2021). Ethnobiology: Evolution of a discipline. John Wiley & Sons, New York.
  • Arrivabene, A., Lasic, L., Blanco, J., Carrière, S. M., Ladio, A., Caillon, S., Porcher, V., & Teixidor-Toneu, I. (2024). Ethnobiology’s Contributions to Sustainability Science. Journal of Ethnobiology, 44(3), 207-220. [https://doi.org/10.1177/02780771241261221]
  • Aswani, S., & Lauer, M. (2006). Incorporating fishermen’s local knowledge and behavior into geographical information systems (GIS) for designing marine protected areas in Oceania. Human Organization, 65(1), 81-102. [https://doi.org/10.17730/humo.65.1.4y2q0vhe4l30n0uj]
  • Batra, R., & Verma, A. (2022). Challenges in implementing normalization for big data systems. Big Data and Society, 8(1), 101-120.
  • Berlin, B. (1992). Ethnobiological Classification: Principles of Categorizing Organisms. Princeton University Press. [https://doi.org/10.1515/9781400862597]
  • Berlin, B. (2014). Ethnobiological classification: Principles of categorization of plants and animals in traditional societies (Vol. 185). Princeton University Press.
  • Bicker, A., Ellen, R., & Parkes, P. (2003). Indigenous environmental knowledge and its transformations: Critical anthropological perspectives. Routledge. [https://doi.org/10.4324/9780203479568]
  • Böhlen, M., & Sujarwo, W. (2020, October). Machine learning in ethnobotany. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), (pp. 108-113). IEEE [https://doi.org/10.1109/SMC42975.2020.9283069]
  • BYJU’S. (2025). Grouping Data - Definition, Frequency Distribution Table and Example. Available at: https://byjus.com/maths/grouping-data/, [Accessed 10 Mar. 2025].
  • Chopra, R., & Malik, S. (2021). Real-time database challenges: A review. Journal of Advanced Computing Systems, 10(3), 55-72.
  • CHRMP. (2025). 16 Key Principles of Organisation: Comprehensive Guide 2025. Available at: https://www.chrmp.com/principles-of-organisation/, [Accessed 10 Mar. 2025].
  • Codd, E. F. (1970). A Relational Model of Data for Large Shared Data Banks. Communications of the ACM, 13(6), 377-387. [https://doi.org/10.1145/362384.362685]
  • Connolly, T., & Begg, C. (2015). Database Systems: A Practical Approach to Design, Implementation, and Management. 7th ed. Pearson.
  • DataCamp. (2024). What is Third Normal Form (3NF)? A Beginner-Friendly Guide. Available at: https://www.datacamp.com/tutorial/third-normal-form, [Accessed 02 Jan. 2025].
  • Date, C. J. (2019). An Introduction to Database Systems. 8th ed. Addison-Wesley.
  • Dhabe, P. S., Patwardhan, M. S., Deshpande, A. A., Dhore, M. L., Barbadekar, B. V., & Abhyankar, H. K. (2010). Articulated entity relationship (AER) diagram for complete automation of relational database normalization. International journal of database management systems, 2(2), 84-100. [https://doi.org/10.5121/ijdms.2010.2206]
  • Eessaar, E. (2016). The Database Normalization Theory and the Theory of Normalized Systems: Finding a Common Ground. Baltic Journal of Modern Computing, 4(1).
  • Elmasri, R., & Navathe, S. (2016). Fundamentals of Database Systems. 6th ed. Pearson.
  • Elmasri, R., & Navathe, S. (2021). Fundamentals of Database Systems. 7th ed. Pearson.
  • Ethnobiology Letters. (2023). Society of Ethnobiology. Available at: https://ethnobiology.org/publications/ethnobiology-letters, [Accessed 01 Feb. 2025].
  • Ferreira Júnior, W. S., Medeiros, P. M., & Albuquerque, U. P. (2022). Evolutionary ethnobiology. Ethnobiology and Conservation, 11. [https://doi.org/10.15451/ec2022-04-11.10-1-8]
  • Gaoue, O. G., Moutouama, J. K., Coe, M. A., Bond, M. O., Green, E., Sero, N. B., Bezeng, B. S., & Yessoufou, K. (2021). Methodological advances for hypothesis‐driven ethnobiology. Biological Reviews, 96(5), 2281-2303. [https://doi.org/10.1111/brv.12752]
  • Garnett, S. T., Burgess, N. D., Fa, J. E., Fernández-Llamazares, Á., Molnár, Z., Robinson, C. J., Watson, J. E., Zander, K. K., Austin, B., Brondizio, E. S., & Collier, N. F. (2018). A spatial overview of the global importance of Indigenous lands for conservation. Nature Sustainability, 1(7), 369-374. [https://doi.org/10.1038/s41893-018-0100-6]
  • Gupta, A., & Sharma, P. (2021). Reducing redundancy in modern databases: Techniques and outcomes. Journal of Database Research, 13(1), 78-90.
  • Hanazaki, N. (2024). Local and traditional knowledge systems, resistance, and socioenvironmental justice. Journal of Ethnobiology and Ethnomedicine, 20(1), 5. [https://doi.org/10.1186/s13002-023-00641-0]
  • Hoffer, J. A., Ramesh, V., & Topi, H. (2016). Modern Database Management. 12th ed. Pearson.
  • Hoffer, J., Venkataraman, R., & Topi, H. (2019). Modern Database Management. 13th ed. Pearson.
  • Hunn, E. S., & Brown, C. H. (2011). Linguistic ethnobiology. In Ethnobiology (pp. 319-333). Wiley-Blackwell. [https://doi.org/10.1002/9781118015872.ch19]
  • Ishaq, M., Abid, A., Farooq, M. S., Manzoor, M. F., Farooq, U., Abid, K., & Abu Helou, M. (2023). Advances in database systems education: Methods, tools, curricula, and way forward. Education and Information Technologies, 28, 2681-2725. [https://doi.org/10.1007/s10639-022-11293-0]
  • Kumar, N., Ahmed, A., & Sharma, V. (2021). Balancing normalization and performance in database design. Database Management Review, 16(4), 34-50.
  • Leonti, M. (2024). Are we romanticizing traditional knowledge? A plea for more experimental studies in ethnobiology. Journal of Ethnobiology and Ethnomedicine, 20(1), 56. [https://doi.org/10.1186/s13002-024-00697-6]
  • Ludwig, D. (2018). Does Cognition Still Matter in Ethnobiology? Ethnobiology Letters, 9(2), 269-275. [https://doi.org/10.14237/ebl.9.2.2018.1350]
  • Ludwig, D. (2019). Indigenous and scientific kinds. The British Journal for the Philosophy of Science, 70(4), 1017-1037.
  • Ludwig, D., & El-Hani, C. N. (2020). Philosophy of Ethnobiology: Understanding Knowledge Integration and Its Limitations. Journal of Ethnobiology, 40(1), 3-20. [https://doi.org/10.2993/0278-0771-40.1.3]
  • Ludwig, D., & Weiskopf, D. A. (2019). Ethnoontology: Ways of World-Building across Cultures. Philosophy Compass, 14(9). [https://doi.org/10.1111/phc3.12621]
  • Ma, J. (2024). Reforming Database Courses Based on Knowledge Graphs. International Journal of New Developments in Education, 6(6), 203-208. [https://doi.org/10.25236/IJNDE.2024.060631]
  • Mbaegbu, S. C., & Osuafor, M. A. (2023). Ethnobiology Instructional Approach: Effect on Secondary School Students’ Retention of Biology Concepts in Onitsha Education Zone. International Journal of Trend in Scientific Research and Development, 7(1), 237-245.
  • Mbangata, L., & Singh, U. G. (2024). A Literature Review on Teaching and Learning Database Normalisation: Approaches and Tools. In International Congress on Information and Communication Technology (pp. 1-11). Springer, Singapore. [https://doi.org/10.1007/978-981-97-3302-6_1]
  • Nabhan, G. P. (2022). Ethnobiology for the future: Linking cultural and ecological diversity. Journal of Ethnobiology, 42(1), 1-15.
  • Nabhan, G. P. (Ed.). (2016). Ethnobiology for the future: Linking cultural and ecological diversity. University of Arizona Press.
  • Pardo-de-Santayana, M., & Macía, M. J. (2023). Traditional ecological knowledge systems as models for biodiversity databases. Journal of Ethnobiology, 43(1), 56-71.
  • Patel, N., & Gupta, S. (2021). Ensuring data integrity in regulatory-compliant systems. Journal of Governance in IT, 9(2), 102-117.
  • Posey, D. A. (1999). Cultural and Spiritual Values of Biodiversity. United Nations Environment Programme. [https://doi.org/10.3362/9781780445434.000]
  • Posey, D. A. (2003). Kayapó ethnoecology and culture (Vol. 6). Routledge. [https://doi.org/10.4324/9780203220191]
  • Rahman, A., & Sarkar, R. (2023). Training gaps in database normalization practices. Journal of IT Education and Practice, 11(2), 89-98.
  • Elmasri, R., & Navathe, S. (2021). Conceptual Database Design and Database Management. 7th ed. Pearson.
  • Renze, M. (2023). Nominal, Ordinal, Interval, and Ratio Data. Available at: https://matthewrenze.com/articles/the-four-subtypes-of-data-in-data-science/, [Accessed 10 Mar. 2025].
  • Rob, P., & Coronel, C. (2017). Database Systems: Design, Implementation, & Management. 12th ed. Cengage Learning.
  • Shah, P., Ali, H., & Verma, K. (2020). Challenges in maintaining data integrity in distributed systems. Distributed Computing Journal, 18(3), 64-85.
  • Silberschatz, A., Korth, H. F., & Sudarshan, S. (2010). Database System Concepts. 6th ed. McGraw-Hill.
  • Silberschatz, A., Korth, H. F., & Sudarshan, S. (2019). Database System Concepts. 7th ed. McGraw-Hill.
  • Slonka, K. J., & Bhatnagar, N. (2024). Teaching database normalization: do prerequisites matter? Issues in Information Systems, 25(1), 128-135.
  • Vertabelo. (2020). Normalization in Relational Databases: First Normal Form (1NF), Second Normal Form (2NF), and Third Normal Form (3NF). Available at: https://vertabelo.com/blog/normalization-1nf-2nf-3nf/, [Accessed 02 Jan. 2025].
  • Wilson, R. A., & Neco, L. C. (2023). Ethnobiology, the ontological turn, and human sociality. Journal of Ethnobiology, 43(3), 198-207. [https://doi.org/10.1177/02780771231194781]
  • Wolverton, S., Nolan, J. M., & Ahmed, W. (2014). Ethnobiology, political ecology, and conservation. Journal of Ethnobiology, 34(2), 125-152. [https://doi.org/10.2993/0278-0771-34.2.125]
About the authors

Lubabalo Mbangata is a lecturer in the Department of Information Systems and programme coordinator for the Diploma in ICT in Business Analysis (DIIBA1) at the Durban University of Technology, South Africa (Durban campus). He holds a master’s degree in Information and Communications Technology from the Durban University of Technology and a Bachelor of Technology degree in IT (Business Applications) from the Tshwane University of Technology, South Africa (Soshanguve campus). He is currently registered for a PhD in Information Systems & Technology in the School of Management, IT and Governance at the University of KwaZulu-Natal, South Africa (Westville campus). His research interests include database normalisation, ethnoscience concepts, ICT adoption, and teaching pedagogies.

Upasana Gitanjali Singh is an Academic Leader and an Associate Professor of Information Systems & Technology in the School of Management, IT and Governance at the University of KwaZulu-Natal, South Africa (Westville campus). With over 15 years of teaching experience, she specializes in IT-related subjects such as e-commerce, IT Consulting, IT Strategy, Programming, and Research Methodology. Her research interests focus on Educational Technologies, and she has led numerous international projects on Digital Teaching, Learning, and Assessment. Professor Singh’s research profile includes four edited books, 24 journal papers, 12 book chapters, and 26 peer-reviewed conference papers. She has served as a keynote speaker at over 25 international conferences and chairs the International Conference on Digital Teaching, Learning, and Assessment (digiTAL2K). Committed to advancing teaching practices, she completed a Fellowship in Teaching Advancement in Universities (TAU) in 2019 and has supported over 1500 academics in adopting digital teaching methods. During the pandemic, Professor Singh developed three conceptual models of the transition to online learning, addressing academics, students, and women respectively. She has secured research grants, including a substantial award from the National Research Foundation focusing on Digital Capital at South African Higher Education Institutions. Her recent publications contribute to the scholarship on Digital Teaching, Learning, and Assessment, addressing online teaching, quality assurance, and the future of digital teaching. Nominated for various awards, she represented UKZN at the National University Teaching Awards in 2024.