Wednesday, 31 May 2017

Data Modelling

Data modeling in software engineering is the process of creating a data model for an information system by applying certain formal techniques.
Data modeling is a process used to define and analyze data requirements needed to support the business processes within the scope of corresponding information systems in organizations. Therefore, the process of data modeling involves professional data modelers working closely with business stakeholders, as well as potential users of the information system.
There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system. The data requirements are initially recorded as a conceptual data model which is essentially a set of technology independent specifications about the data and is used to discuss initial requirements with the business stakeholders. The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables, and accounts for access, performance and storage details. Data modeling defines not just data elements, but also their structures and the relationships between them.
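As a small, hedged illustration of this progression, the Python sketch below traces a single invented "Customer places Order" requirement through the three levels; the entity and attribute names are assumptions made up for the example, not part of any standard.

    # Illustrative only: one invented "Customer places Order" requirement,
    # captured at the three successive levels of data modeling.

    # 1. Conceptual model: technology-independent entities and relationships,
    #    suitable for discussing requirements with business stakeholders.
    conceptual = {
        "entities": ["Customer", "Order"],
        "relationships": [("Customer", "places", "Order")],
    }

    # 2. Logical model: adds attributes, identifiers and cardinality, but still
    #    says nothing about how or where the data will be stored.
    logical = {
        "Customer": {"attributes": ["customer_id", "name", "email"], "key": "customer_id"},
        "Order": {"attributes": ["order_id", "customer_id", "order_date"], "key": "order_id"},
        "relationships": [("Customer", "Order", "one-to-many")],
    }

    # 3. Physical model: organizes the data into tables and records access,
    #    performance and storage details (column types, foreign keys, indexes).
    physical = {
        "customer": {"columns": {"customer_id": "BIGINT PRIMARY KEY",
                                 "name": "VARCHAR(200)",
                                 "email": "VARCHAR(320)"}},
        "customer_order": {"columns": {"order_id": "BIGINT PRIMARY KEY",
                                       "customer_id": "BIGINT REFERENCES customer(customer_id)",
                                       "order_date": "DATE"},
                           "indexes": ["ix_customer_order_customer_id"]},
    }

    for level, model in (("conceptual", conceptual), ("logical", logical), ("physical", physical)):
        print(level, "->", model)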

Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization, e.g., using data modeling:
  • to assist business analysts, programmers, testers, manual writers, IT package selectors, engineers, managers, related organizations and clients to understand and use an agreed semi-formal model of the concepts of the organization and how they relate to one another
  • to manage data as a resource
  • for the integration of information systems
  • for designing databases/data warehouses (aka data repositories)

Data modeling may be performed during various types of projects and in multiple phases of projects. Data models are progressive; there is no such thing as the final data model for a business or application. Instead a data model should be considered a living document that will change in response to a changing business. The data models should ideally be stored in a repository so that they can be retrieved, expanded, and edited over time. Whitten et al. (2004) determined two types of data modeling:
  • Strategic data modeling: This is part of the creation of an information systems strategy, which defines an overall vision and architecture for information systems. Information engineering is a methodology that embraces this approach.
  • Data modeling during systems analysis: In systems analysis logical data models are created as part of the development of new databases.
Data modeling is also used as a technique for detailing business requirements for specific databases. It is sometimes called database modeling because a data model is eventually implemented in a database.
Data modeling topics
Data models
Figure: How data models deliver benefit.
Data models provide a structure for data used within information systems by providing specific definition and format. If a data model is used consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data seamlessly. The results of this are indicated in the diagram. However, systems and interfaces are often expensive to build, operate, and maintain. They may also constrain the business rather than support it. This may occur when the quality of the data models implemented in systems and interfaces is poor.
  • Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces. Business rules therefore need to be implemented in a flexible way that does not result in complicated dependencies; the data model should be flexible enough that changes in the business can be accommodated relatively quickly and efficiently (a small sketch follows this list).
  • Entity types are often not identified, or are identified incorrectly. This can lead to replication of data, data structure, and functionality, together with the attendant costs of that duplication in development and maintenance. Therefore, data definitions should be made as explicit and easy to understand as possible to minimize misinterpretation and duplication.
  • Data models for different systems are arbitrarily different. The result is that complex interfaces are required between systems that share data; these interfaces can account for between 25% and 70% of the cost of current systems. Required interfaces should be considered as part of data model design, since a data model on its own is of limited use without interfaces to the other systems that share its data.
  • Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data has not been standardised. To obtain optimal value from an implemented data model, it is very important to define standards that ensure data models both meet business needs and remain consistent.
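As a hedged sketch of the point about flexible business rules above, the fragment below keeps an invented rule ("orders above a limit need approval") as data rather than baking it into the table structure, so the rule can change without a schema change. All names and thresholds are made up for the example.

    # Invented example: an order-approval rule kept as data, not as structure.
    approval_thresholds = {
        "standard": 10_000,   # customer segment -> order total above which approval is needed
        "premium": 50_000,
    }

    def needs_approval(customer_segment: str, order_total: float) -> bool:
        """Return True if the order must be routed for approval under the current rules."""
        # Unknown segments default to requiring approval (threshold 0).
        threshold = approval_thresholds.get(customer_segment, 0)
        return order_total > threshold

    print(needs_approval("standard", 12_500))  # True
    print(needs_approval("premium", 12_500))   # False

    # Changing a threshold, or adding a segment, edits data -- the data model
    # (and the systems and interfaces built on it) stays the same.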
Conceptual, logical and physical schemas
In 1975 ANSI described three kinds of data-model instance:
  • Conceptual schema: describes the semantics of a domain (the scope of the model). For example, it may be a model of the interest area of an organization or of an industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial “language” with a scope that is limited by the scope of the model. Simply described, a conceptual schema is the first step in organizing the data requirements.
  • Logical schema: describes the structure of some domain of information. This consists of descriptions of (for example) tables, columns, object-oriented classes, and XML tags. The logical schema and conceptual schema are sometimes implemented as one and the same.
  • Physical schema: describes the physical means used to store data. This is concerned with partitions, CPUs, tablespaces, and the like.
According to ANSI, this approach allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual schema. The table/column structure can change without (necessarily) affecting the conceptual schema. In each case, of course, the structures must remain consistent across all schemas of the same data model.
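This independence can be sketched in Python with SQLAlchemy (assuming the library is installed; the table is invented): one logical definition is compiled into physical DDL for two different storage technologies, without the logical schema changing.

    # One logical table definition, rendered as physical DDL for two dialects.
    from sqlalchemy import Column, Integer, MetaData, String, Table
    from sqlalchemy.dialects import postgresql, sqlite
    from sqlalchemy.schema import CreateTable

    metadata = MetaData()

    # Logical schema: the table, its columns and its key, independent of storage.
    customer = Table(
        "customer", metadata,
        Column("customer_id", Integer, primary_key=True),
        Column("name", String(200), nullable=False),
    )

    # Physical schema: dialect-specific CREATE TABLE statements generated from
    # the same logical definition -- the storage technology can change without
    # the logical (or conceptual) schema changing.
    print(CreateTable(customer).compile(dialect=postgresql.dialect()))
    print(CreateTable(customer).compile(dialect=sqlite.dialect()))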

Data modeling process
Figure: Data modeling in the context of business process integration.
In the context of business process integration (see figure), data modeling complements business process modeling, and ultimately results in database generation.
The process of designing a database involves producing the previously described three types of schemas – conceptual, logical, and physical. The database design documented in these schemas is converted, through a Data Definition Language (DDL), into statements that can then be used to generate a database. A fully attributed data model contains detailed attributes (descriptions) for every entity within it. The term “database design” can describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term “database design” could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the Database Management System or DBMS.
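As a hedged sketch of that last step (assuming SQLAlchemy, and using an in-memory SQLite engine purely for illustration), the invented design below is turned into Data Definition Language and executed to generate an actual database:

    # From a documented design to DDL to a generated database.
    from sqlalchemy import (Column, Date, ForeignKey, Integer, MetaData,
                            String, Table, create_engine)

    metadata = MetaData()

    Table("customer", metadata,
          Column("customer_id", Integer, primary_key=True),
          Column("name", String(200), nullable=False))

    Table("customer_order", metadata,
          Column("order_id", Integer, primary_key=True),
          Column("customer_id", Integer, ForeignKey("customer.customer_id")),
          Column("order_date", Date))

    # echo=True prints the CREATE TABLE (DDL) statements as they are emitted.
    engine = create_engine("sqlite:///:memory:", echo=True)
    metadata.create_all(engine)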
In the process, system interfaces account for 25% to 70% of the development and support costs of current systems. The primary reason for this cost is that these systems do not share a common data model. If data models are developed on a system-by-system basis, then not only is the same analysis repeated in overlapping areas, but further analysis must be performed to create the interfaces between them. Most systems within an organization contain the same basic data, redeveloped for a specific purpose. Therefore, an efficiently designed basic data model can minimize rework, with minimal modifications, for the purposes of different systems within the organization.

Modeling methodologies
Data models represent information areas of interest. While there are many ways to create data models, according to Len Silverston (1997) only two modeling methodologies stand out, top-down and bottom-up:
  • Bottom-up models, or View Integration models, are often the result of a reengineering effort. They usually start with existing data structures: forms, fields on application screens, or reports. These models are usually physical, application-specific, and incomplete from an enterprise perspective. They may not promote data sharing, especially if they are built without reference to other parts of the organization.
  • Top-down logical data models, on the other hand, are created in an abstract way by getting information from people who know the subject area. A system may not implement all the entities in a logical model, but the model serves as a reference point or template.
Sometimes models are created in a mixture of the two methods: by considering the data needs and structure of an application and by consistently referencing a subject-area model. Unfortunately, in many environments the distinction between a logical data model and a physical data model is blurred. In addition, some CASE tools don’t make a distinction between logical and physical data models.
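A hedged sketch of the two methodologies, again assuming SQLAlchemy and an invented table: the top-down model is written first from subject-area knowledge, while the bottom-up model is recovered by reflecting structures that already exist in a database.

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

    # Top-down: the logical model is defined first, from subject-matter knowledge.
    top_down = MetaData()
    Table("product", top_down,
          Column("product_id", Integer, primary_key=True),
          Column("description", String(500)))

    # Stand-in for an existing ("legacy") database to reverse-engineer.
    engine = create_engine("sqlite:///:memory:")
    top_down.create_all(engine)

    # Bottom-up (view integration): the model is recovered from what is already
    # there, by reflecting the live database's tables.
    bottom_up = MetaData()
    bottom_up.reflect(bind=engine)
    print(sorted(bottom_up.tables))  # ['product']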

Contact Info:
Mindmajix Technologies Inc
USA : +1-201 3780 518
IND : +91 9246333245
Email: info@mindmajix.com

Team Foundation Server

Team Foundation Server (commonly abbreviated to TFS) is a Microsoft product that provides source code management (either with Team Foundation Version Control or Git), reporting, requirements management, project management (for both agile software development and waterfall teams), automated builds, lab management, testing and release management capabilities.

It covers the entire application life cycle, and enables DevOps capabilities. TFS can be used as a back-end to numerous integrated development environments (IDEs) but is tailored for Microsoft Visual Studio and Eclipse on all platforms.

Online vs On-premises

Team Foundation Server is available in two different forms: on-premises and online. The latter form is called Visual Studio Team Services (formerly Visual Studio Online). The cloud service is backed by Microsoft’s cloud platform, Microsoft Azure.
It uses the same code as the on-premises version of TFS, with minor modifications, and implements the most recent features.
Visual Studio Team Services requires no setup. A user signs in using a Microsoft account to set up an environment, creating projects and adding team members. New features developed in short development cycles are added to the cloud version first. These features migrate to the on-premises version as updates, at approximately three-month intervals.

Architecture

The following major features are included in Team Foundation Server:
  • Version control, for managing source code and other deliverables that require versioning.
  • Work item tracking, for keeping track of such things as defects, requirements, tasks, and scenarios (a small query example follows this list).
  • Project management functions, which allow the shaping of a team project based on a user-specifiable software process, and which enable planning and tracking using Microsoft Excel and Microsoft Project.
  • Team build, for enabling a common process for building executable products.
  • Data collection and reporting, which aid in the assessment of a team project’s state, based on information gleaned from Team Foundation Server tools.
  • The Team Project Portal, which provides a central point of communication for a team project packaged as a Microsoft Windows SharePoint Services site.

  • Team Foundation Shared Services, which provide a number of common infrastructure services that are invisible to end users but that are important to toolsmiths and extenders.
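As a hedged example of working against the work item tracking feature mentioned above, the Python sketch below fetches one work item over the REST API that recent TFS / Azure DevOps Server versions expose; the server URL, collection, token, work item id, and api-version are placeholders or assumptions to adapt for a real instance.

    # Hedged sketch: read a single work item over the TFS/Azure DevOps REST API.
    import base64
    import requests

    TFS_URL = "https://tfs.example.com/DefaultCollection"  # placeholder instance/collection
    PERSONAL_ACCESS_TOKEN = "..."                           # placeholder PAT

    def get_work_item(work_item_id: int) -> dict:
        """Return the JSON representation of one work item."""
        # A personal access token is sent as HTTP basic auth with a blank username.
        auth = base64.b64encode(f":{PERSONAL_ACCESS_TOKEN}".encode()).decode()
        response = requests.get(
            f"{TFS_URL}/_apis/wit/workitems/{work_item_id}",
            params={"api-version": "4.1"},  # assumption: a TFS 2018-era API version
            headers={"Authorization": f"Basic {auth}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    # Example usage (uncomment with real values):
    # print(get_work_item(1)["fields"]["System.Title"])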
For TFS online training, visit: mindmajix/TFS training.

What is SAP Business One?

SAP Business One is a single, integrated, and affordable ERP solution for smaller businesses.

SAP Business One, developed by SAP, has been built from the ground up for small to medium-sized businesses that have outgrown their accounting-only or legacy systems and are looking for a single, integrated solution to manage their entire business.

This specific ERP solution provides executives and managers with instant access to critical business information so they can confidently make informed business decisions, and it offers more than just accounting.

SAP Business One is a comprehensive solution that covers virtually all aspects of your business including finance, logistics/operations and customer relationship management.

Some of the SAP Business One Features

  • Accounting and financials – Manage your general ledger, journals, budgets, and accounts receivable and payable
  • Sales and customer relationship management – Manage the entire sales process from first contact to closing the sale and from customer data management to after sales support
  • Purchasing and operations – Control the entire procurement process
  • Inventory and distribution – Manage inventory across multiple warehouses and locations, and track and record stock movements
  • Reporting and administration – Create, manage, and distribute reports that help foster clarity in your business
Some of the SAP Business One Benefits
  •  Spend more time growing your business using newly streamlined operations instead of reacting to details of day-to-day tasks
  • Respond quickly to customer needs by instantly accessing the information necessary to make confident business decisions
  • Eliminate redundant data entry and errors with a single, integrated system that improves process efficiency, minimizes costs and delays, and strengthens your bottom line
  • Form closer customer relationships via centralized information that makes it easier to manage customer communication and sales contracts
  • Lower your technology costs and achieve faster time to value by using a system that can be implemented quickly, is uncomplicated to maintain, and minimizes end-user training

Pay scale for SAP Business One Consultant
An SAP Business One Consultant earns an average salary of $92,500 per year.
Source: PayScale
Where can I find the best place to get trained on SAP Business One?
MindMajix is the leader in delivering online training for a wide range of IT software courses, such as SAP Business One.
Contact Mindmajix for more information on SAP Business One.