Introducing Mendix, the market-leading Low-Code technology by Siemens

Mendix is a platform for developing high-productivity business apps. Its power and agile workflow make it possible to deliver apps quickly.

Mendix is an easy-to-use, low-code application development platform that requires few resources and enables apps to be launched on the market in days.

Its uses range from business applications that help factories run smoothly to simple, easy-to-use internal tools that save time and boost productivity. It allows developers and companies to create apps with an added value that is rarely achieved with other platforms.

One of its main features is collaborative work, as the environment enables restriction-free teamwork. All interested parties, whether on the business side or developers, can participate intuitively through the web portal. Furthermore, it makes it possible to manage and develop projects effectively thanks to its agile task management environment.

Building Apps with Mendix and MindSphere

Companies in the industrial sector are undergoing development and growth; therefore, they need to create apps to offer quick solutions to clients. The connection that Siemens offers between Mendix and MindSphere is a unique and incredibly powerful solution. It offers everything needed to drive innovation in the industry.

MindSphere is Siemens' cloud-based, open IoT operating system. It can connect equipment and systems, extract data and convert it into valuable information for any business, which makes it especially important for decision-making.

The environment is cloud-native, scalable and very easy to deploy, so an app can be ready in minutes. The Mendix architecture is extremely robust, well prepared to create any type of app and keep it working to its full potential thanks to the maintenance of its servers in the cloud.

Mendix automatically shares all models between Mendix Studio Pro, the advanced development tool, and Mendix Studio, a web portal geared towards the business side that is much more visual and less development-specific.

Also in SAP S/4HANA

Mendix is the only low-code platform that can run natively on the SAP S/4HANA data platform.

Apps are at the heart of business success. Mendix complements SAP solutions by providing organisations with the fastest and easiest way to develop apps that drive customer experience, employee engagement and operational efficiency.

Thanks to advanced data analysis and the capabilities of the in-memory database, it is the platform most highly rated by SAP for working with SAP S/4HANA, SAP C/4HANA, SAP ECC and SAP SuccessFactors.

At Sothis, we work alongside our clients on a daily basis to provide new technological solutions such as Mendix. For them, it is very easy to see early progress and be accompanied throughout the decision-making process, which is extremely convenient thanks to Mendix's agile workflow.

Achieving the result that the client expects, in the right way and as quickly as possible, is our only objective.

The right to data portability


The current legislation regarding data protection, the GDPR and LOPDGDD, introduced updates to your rights. Besides the recognised “Right to erasure”, there is the new “Right to data portability”, which we are going to explain in this article to clear up any doubts that may arise from the literal interpretation of the definition given by the GDPR.

Article 20 of the General Data Protection Regulation defines the right to transfer and recognises that the data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format and have the right to transmit those data to another controller without hindrance from the controller to which the personal data have been provided.

Under which circumstances? When:

  • The processing is based on the consent granted by the data subject to process his or her personal data for one or more specific purposes;
  • The processing is carried out by automated means.

In exercising his or her right to data portability, the data subject shall have the right to have the personal data transmitted directly from one controller to another, where technically feasible.

Interoperable portability formats

For its part, Recital 68 of the GDPR stipulates as follows:

To further strengthen the control over his or her own data, where the processing of personal data is carried out by automated means, the data subject should also be allowed to receive personal data concerning him or her which he or she has provided to a controller in a structured, commonly used, machine-readable and interoperable format, and to transmit it to another controller. Data controllers should be encouraged to develop interoperable formats that enable data portability.

That right should apply where the data subject provided the personal data on the basis of his or her consent or the processing is necessary for the performance of a contract. It should not apply where processing is based on a legal ground other than consent or contract. By its very nature, that right should not be exercised against controllers processing personal data in the exercise of their public duties

It should therefore not apply where the processing of the personal data is necessary for compliance with a legal obligation to which the controller is subject or for the performance of a task carried out in the public interest or in the exercise of an official authority vested in the controller.

The data subject’s right to transmit or receive personal data concerning him or her should not create an obligation for the controllers to adopt or maintain processing systems which are technically compatible. Where, in a certain set of personal data, more than one data subject is concerned, the right to receive the personal data should be without prejudice to the rights and freedoms of other data subjects in accordance with this Regulation.

Furthermore, that right should not prejudice the right of the data subject to obtain the erasure of personal data and the limitations of that right as set out in this Regulation and should, in particular, not imply the erasure of personal data concerning the data subject which have been provided by him or her for the performance of a contract to the extent that and for as long as the personal data are necessary for the performance of that contract

Where technically feasible, the data subject should have the right to have the personal data transmitted directly from one controller to another

On the other hand, Article 17 of the LOPDGDD under the subsection “The right to data portability” stipulates as follows: “The right to data portability shall be exercised in accordance with what is set forth in Article 20 of Regulation (EU) 2016/679”.

Inferred and derived data

Having said that, it is important to highlight that data considered "inferred" or "derived" is not subject to the right to portability. This is data generated during the provision of the service by applying knowledge or skills that are part of the controller's know-how (such as mathematical skills or those resulting from applying algorithms) to data related to the product or service.

The Guidelines on exercising the right to portability adopted by the Working Party created by Article 29 of Directive 95/46/EC were very helpful and clear. Among other things, the Article 29 Working Party guide makes reference to the tools employed for data portability.

From a technical point of view, data controllers must offer different options for putting the right to data portability into practice. For example, they must offer the data subject the option of a direct download, while also allowing data subjects to transmit the data directly to another data controller. This could be done by supplying an API (an application programming interface or web service) that the controller provides so that other systems or applications can connect to and work with its systems.
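As a purely illustrative sketch (not mandated by the GDPR or any standard), exporting the data a subject has provided in a structured, commonly used, machine-readable format such as JSON could look like this; the record contents are invented for the example:

```python
import json

def export_subject_data(subject_record: dict) -> str:
    """Serialise the personal data a subject has provided into a
    structured, machine-readable format (JSON)."""
    portable = {
        "format": "JSON",
        "data_provided_by_subject": subject_record,
    }
    return json.dumps(portable, indent=2, ensure_ascii=False)

# Hypothetical example record
record = {"name": "Jane Doe", "email": "jane@example.com"}
print(export_subject_data(record))
```

A structured export like this can be offered both as a direct download and over an API endpoint, so another controller's system can ingest it without manual rework.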

When can the right to portability be exercised?

In conclusion, it must be clear that the right to portability may only be exercised in the cases foreseen by Article 20 of the GDPR, and certain conditions have to be met with regard to the data that can be “ported”:

  • First condition: personal data concerning the data subject.
  • Second condition: data supplied by the data subject.
  • Third condition: the right to data portability shall not adversely affect the rights and freedoms of others.

Similarly, more information and forms for exercising the right to data portability can be found on the Spanish Data Protection Agency website.

The different types of Information Security audit

The role of security in organisations is growing increasingly important. The impact and likelihood of suffering a security incident should be minimised as far as possible. This is why many organisations are opting for proactive protection, by including audits in their security roadmaps.

This type of process can objectively identify security vulnerabilities and gaps. These vulnerabilities are linked to threat vectors that may compromise Information Security, such as people, processes, services, information, technology, facilities, and suppliers.

In my opinion, we can classify the different types of security audit into three main blocks based on the subject of the audit and the techniques used. These blocks are:

  • Information Security best practice audits
  • Information Security legal and regulatory compliance audits
  • Ethical Hacking audits

In this article we will look at each of these types of audit, defining how and under what criteria they are performed. We will also identify the type of expert involved in each one and, finally, we will analyse how they can help enhance security in organisations.

Information Security best practice audits

Firstly, we’ll discuss Information Security best practice audits.

When performing this type of audit, benchmarks or frameworks (either national or international) are commonly used. We use them to contrast the status of our organisation with the security controls in the benchmarks. Typically, these frameworks effectively cover every aspect that may compromise an organisation’s assets.
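In practice, contrasting an organisation's status with the controls in a benchmark amounts to a gap analysis. A minimal sketch, using invented control identifiers loosely modelled on ISO 27000-style catalogues:

```python
# Hypothetical gap analysis: contrast the organisation's implemented
# controls with a (simplified, invented) framework control list.
framework_controls = {
    "A.5.1": "Information security policy",
    "A.8.1": "Inventory of assets",
    "A.9.2": "User access management",
}
implemented = {"A.5.1", "A.9.2"}  # controls the organisation has in place

# Any framework control not implemented is a gap to report
gaps = {cid: name for cid, name in framework_controls.items()
        if cid not in implemented}
for cid, name in sorted(gaps.items()):
    print(f"Gap: {cid} - {name}")
```

A real audit works with hundreds of controls and maturity scores rather than a binary implemented/missing flag, but the contrast step is the same.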

Some of the most well-known reference frameworks in this field are:

  • International Organization for Standardization (ISO 27000)
  • National Institute of Standards and Technology (NIST)
  • National Security Framework (ENS)

Frequently, organisations that have significant numbers of security requirements regarding their business processes define their own reference frameworks in line with the organisation’s needs. This comprehensive approach aims to provide a single, centralised view that prevents rework.

This type of audit is generally performed by IT professionals who are specialists in Information Security and familiar with the reference frameworks of the audit.

Legal and regulatory compliance audits

One of the aspects to be considered is the effect of legal and regulatory obligations on Information Security in the organisation. That is why the second type of audit we will look at is the legal and regulatory compliance audit.

This type of audit assesses compliance with security laws and regulations. Some of the most important are listed below:

  • Organic Law on Data Protection (LOPD)
  • General Data Protection Regulation (GDPR)
  • Law on Information Society Services (LSSICE)
  • Intellectual Property Law (LPI)
  • Critical Infrastructure Protection Law (PIC)
  • Prevention of Occupational Hazards Law (LPRL)
  • National Security Framework (ENS)

This type of audit is performed from a legal standpoint focusing on Information Security. That is why it requires a multidisciplinary team of specialist security lawyers and IT auditors who hold extensive knowledge of the applicable laws and regulations in this field.

Another critical area is the protection of an organisation’s technological infrastructure, which must be audited separately, from a more technical angle. That is why, in third and final place, we will discuss Ethical Hacking.

This type of audit realistically simulates the actions of cyberattackers using technical tools and resources to test the robustness of technological infrastructure and, specifically, information systems.

Ethical hacking

In Ethical Hacking audits we can distinguish between vulnerability audits, penetration tests, and Red Team testing. Each of these types of audit has specific features and restrictions, such as scope and the type of technical resources to be used. However, the aim of these tests is to find security vulnerabilities or gaps in the organisation’s technological infrastructure.

This type of audit uses methods and standards to ensure effective results. Some of the most widely used are the Open Source Security Testing Methodology Manual (OSSTMM), Center for Internet Security (CIS) benchmarks, the Open Web Application Security Project (OWASP), and MITRE ATT&CK.

This type of audit is generally performed by IT professionals who are specialists in cybersecurity. They have extensive technical expertise and in-depth knowledge of programming and information security.

As we have seen in this post, we can differentiate between three main blocks of security audit which are performed by experts with different skillsets.

On the one hand, best practice audits are aimed at risk management and help us assess the threat exposure of an organisation to provide an overall view of the status of its Information Security. On the other hand, legal and regulatory compliance audits assess the organisation’s culture of compliance with the ultimate aim of avoiding fines. Lastly, Ethical Hacking audits aim to test the resilience and protection of the organisation’s technological security infrastructure.

Each of these types of audit helps provide indicators that enhance the security maturity of the organisation as part of the process of continuously improving Information Security Governance.

Determination of VAT in SAP Transportation Management operations

The evolution of transport management systems allows companies to centrally manage all operations in the same flow, from loading through to transport and receipt of goods in different locations.

In SAP, these services can be invoiced automatically, generating the information in the transport module and sending it automatically to accounting.

What is VAT?

In order to discuss determining VAT in SAP Transportation Management operations, we will begin by defining what VAT is.

VAT (value added tax) is a tax levied on sales transactions of products and services, in both national and foreign territory. In Spain there are three VAT rates: general (21%), reduced (10%) and super-reduced (4%). There is also VAT-exempt status (0%) for products or services which are exempt from payment for a legal reason.
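As a quick worked example, applying these rates to a net amount is straightforward:

```python
# Spanish VAT rates as described above
VAT_RATES = {"general": 0.21, "reduced": 0.10, "super_reduced": 0.04, "exempt": 0.0}

def add_vat(net_amount: float, category: str) -> float:
    """Return the gross amount after applying the VAT rate for the category."""
    return round(net_amount * (1 + VAT_RATES[category]), 2)

print(add_vat(100.0, "general"))  # 121.0
print(add_vat(100.0, "reduced"))  # 110.0
```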

For the transport sector, VAT is determined according to the business activity of the service provision.

VAT on transport

In SAP, transport operations are managed in the SAP TM (Transportation Management) module. For both the payment of transport costs and for invoicing the client for the service, the TM processes are integrated with purchasing (MM) and sales (SD) processes.

At this point, the amounts to be passed on to the costing and income accounts, as well as the tax (VAT) to be paid in each of the operations, are determined automatically.

Payment process of transport operations.

Types of operations

In TM operations there are a wide variety of scenarios including: exports, imports, customs, in-house transport, subcontracted transport, etc.

As we can see in the previous diagram, once the transport and/or management processes have been carried out, settlement is launched in TM, generating a sales invoice or a purchase order to be invoiced, both in the ERP, which is then transferred to accounting.

An important part of the TM document flow is related to the settlement of dispatch orders or freight orders, as the VAT is determined in the transfer to SD or MM. Some of the parameters that can be used for this determination are:

  • Tax classification of the client: national, community, foreign, etc.
  • Tax classification of the service: Due to the complexity of transport operations, there may be services that are always exempt from VAT, always subject to VAT, or whose VAT percentage may vary depending on the characteristics of the operation.
  • Origin and Destination of the goods.
  • Tax regime of the goods: customs warehouse, non-customs warehouse, free…
  • Customer type: whether a shipper, freight forwarder, etc.
  • In the case of customs operations, whether or not expenses included in the SAD (Single Administrative Document) are included.

Determination of VAT in transport operations

In accordance with the points explained above, the VAT determination process will be as follows:

  1. In TM the rates are defined based on cost classes that correspond to each of the billable elements. These could include freight, fuel surcharge, waiting time, etc.
  2. In the purchasing section each cost class will be associated with a service, and in the sales section, with a price condition class. Tax determination will be carried out separately for each element, so each invoice line issued and/or received may have a different tax status to the other lines. For example, on a single invoice, the freight could be subject to VAT, whereas the origin charges (commonly known as “FOB charges”, prior to freight) may have a different VAT percentage, or not be subject to VAT at all.
  3. In each operation, the user will choose a cost class in TM and, depending on the characteristics of the operation (national, community or third-party customers, origin and destination of the merchandise, etc.), the system will determine whether VAT applies and, if so, the percentage to apply. This will be transferred to SD and MM, where the amount and tax indicator will be determined in each case.
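The determination logic described above can be sketched as a simplified function. The parameter names and rules below are invented for illustration and do not reflect actual SD/MM condition configuration:

```python
# Hypothetical, simplified sketch of tax determination for a freight
# settlement line; real determination happens in SAP SD/MM condition
# records, and these rules are invented for illustration only.
def determine_vat(client_class: str, service_class: str) -> float:
    """Return the VAT rate for one settlement line."""
    if service_class == "always_exempt":
        return 0.0            # e.g. services exempt by law
    if client_class in ("community", "foreign"):
        return 0.0            # e.g. intra-community or export transport
    return 0.21               # national operation, general rate

print(determine_vat("national", "standard"))   # 0.21
print(determine_vat("community", "standard"))  # 0.0
```

In the real system these decisions also weigh the origin and destination of the goods, the tax regime of the warehouse and, for customs operations, whether SAD expenses are included.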

As we have seen, in the process for determining VAT in TM operations, there are many possibilities to adapt the VAT allocation according to the flows of transport operations and other parameters. In this way, most of the cases that occur in this type of company can be covered.

Spanish Data Protection Agency warning: data protection a necessity when using thermal cameras


Thermal cameras are taking on a major role in temperature monitoring at workplace or shop entrances. Although their preventative value in monitoring the temperature of those accessing busy places is clear, we cannot forget that taking images, even when they are not stored, of people who want to access a building has data protection implications.

Furthermore, body temperature is health data and, therefore, must be stringently protected, as the Spanish Data Protection Agency (AEPD) warned in a recent statement.

In any case, temperature data collecting must be governed by the principles stipulated in the General Data Protection Regulation (GDPR), including the principle of lawfulness. This means that this processing must have a lawful basis as set forth in data protection legislation on special categories of data (articles 6.1 and 9.2 of GDPR).

Thermal cameras in the return to the office

In a workplace setting, a possible lawful basis could be the obligation of employers to ensure the health and safety of their workers in matters relating to their job. This obligation would also serve as an exception that would allow the processing of health data and as a lawful basis for processing.

However, the GDPR also requires in such cases that the regulation permitting this processing must establish appropriate safeguards. These safeguards must be specified by the data controller.

How to ensure regulatory compliance

What can you do to ensure data protection if you use one of these cameras in a business, an office building, or any other setting? We recommend you follow these steps:

  • Update your record of processing activities.
  • Advise that the camera is being used.
  • Analyse the need for a privacy impact assessment (PIA).
  • Conduct a PIA if the analysis gives a positive result or signals risks.
  • Set up a data disclosure protocol and another concerning the right not to be subject to automated decision-making.
  • Ensure security measures comply with article 32 of GDPR.
  • Establish procedures for data processing in accordance with article 5 of GDPR.

Rights and safeguards

When using this type of system, it is very important to remember that data subjects retain their rights under the GDPR, and that the Regulation's other requirements, adapted to the specific conditions and circumstances of this type of processing, still apply.

Accordingly, measures concerning providing workers, clients, and users with information on this processing (particularly if their data is to be recorded and stored) must be considered. Likewise, other measures allowing those who record a higher than normal temperature to respond to the decision to refuse them access to a specific site (for example, by stating that their raised temperature is due to other reasons) should also be established. Staff must, therefore, be qualified to assess these additional reasons or a procedure must be implemented that ensures the appeal is handled by someone who is capable of, where applicable, granting access.

It is similarly important to establish data storage periods and criteria for cases which are recorded. In principle, given the purposes of processing, this recording and storage should not occur, except when it can be justified by the need to contest any possible legal actions arising from the decision to deny access.

We can help you

If you are thinking about installing one of these thermal cameras or already have one but are not sure you have followed these steps properly, remember that at Sothis we want to help you drive forward your business with data protection guidance from our specialist Information Governance and Security department.

Tell us about your case and we can help you.

Physical cyber attacks: when the danger lies in a USB drive

A company can have several security measures in place to mitigate and protect it from external attacks. But what about physical attacks? What would happen if someone from the company or external members of staff were to connect a malicious USB device to a production machine? Do we have the means and security levels required to detect system changes? We will tell you in this article.

What are Industrial Control Systems?

Industrial Control Systems are the hardware and software connected to the network to manage and monitor a critical infrastructure. These technologies include supervisory control and data acquisition (SCADA) and distributed control systems (DCS), industrial automation and control systems (IACS), programmable logic controllers (PLC), remote terminal units (RTU) and other elements such as control servers, intelligent electronic devices (IED), human-machine interfaces (HMI), sensors, and internet of things (IoT) devices, amongst others.

What are the main risks posed to industrial environments?

  • Transmission of unencrypted passwords.
  • Trivial or default passwords.
  • Known vulnerabilities in out-of-date software or firmware.
  • Lack of segmentation between the IT and OT networks.
  • Identification of unauthorised devices.
  • Compromised third-party hardware or software.
  • Unprotected remote connections or ones with trivial credentials.
  • Directed phishing.

Another important risk not on the list above is physical intrusion.

Our employees, the weakest link

Today I’m bringing you an industrial forensic case, in which information was leaked from the company’s R&D department to the competition, along with a 30% loss of productivity due to the unavailability of the plant’s machinery.

Picture 1: An example of an industrial electrical panel

Execution and result report after daily monitoring

The machines at the plant are not connected to a network with an internet connection. After analysing the plant’s network traffic for several days, it was established that the attacker was using an Asus system with a MAC address ending in 84:5e:41 to modify configuration parameters on a Siemens PLC.
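Filtering captured traffic for a known MAC suffix is easy to script. The records below are invented for illustration; a real analysis would read them from a capture file rather than an in-memory list:

```python
# Illustrative sketch: flag traffic whose source MAC ends in the suffix
# observed during the capture.
SUSPECT_SUFFIX = "84:5e:41"

# Invented packet records standing in for parsed capture data
packets = [
    {"src_mac": "00:1a:2b:84:5e:41", "dst_ip": "192.168.0.10"},
    {"src_mac": "3c:52:82:11:22:33", "dst_ip": "192.168.0.11"},
]

suspect = [p for p in packets if p["src_mac"].lower().endswith(SUSPECT_SUFFIX)]
for p in suspect:
    print(f"Suspicious traffic from {p['src_mac']} to {p['dst_ip']}")
```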

Picture 2: Evidence obtained with Wireshark

The information obtained, compared against the list of active IT devices provided by the IT department, was enough to locate the compromised system at the plant.

Picture 3: Fictitious evidence from the compromised system

The system is built into a box that is protected with special screws. Disassembling it would take a long time and our attacker doesn’t have that time, so the only viable option is to use the USB port.

By compiling the event records of the USB connections, some very useful data can be found, such as the product identifier, the manufacturer and the serial number.

Picture 4: Evidence from the syslog file with USB information

The log has 900,000 lines, so performing this analysis manually would be a very difficult and costly task.

Picture 5: Total number of lines contained in the file

In order to perform this task, we created a Python script that automatically compares the list of serial numbers of the USB devices used by the company against the “syslog” file.

Picture 6: Serial numbers of the USBs used by the company

In just a few seconds it was possible to list a serial number that was not on the list of USB devices authorised by the plant.
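A minimal sketch of such a comparison script follows. The serial numbers are invented, and the exact syslog message format varies by distribution, so the regex here is an assumption:

```python
import re

# Serial numbers of USB devices authorised by the plant (invented values)
AUTHORISED = {"0701234A", "0709876B"}

# Pattern for the kernel's USB attach lines in syslog; treat the exact
# message format as an assumption, since it varies between systems.
SERIAL_RE = re.compile(r"SerialNumber:\s*(\S+)")

def unauthorised_serials(syslog_lines):
    """Return every USB serial number seen in the log that is not authorised."""
    found = set()
    for line in syslog_lines:
        m = SERIAL_RE.search(line)
        if m and m.group(1) not in AUTHORISED:
            found.add(m.group(1))
    return found

log = [
    "kernel: usb 1-2: SerialNumber: 0701234A",
    "kernel: usb 1-3: SerialNumber: DEADBEEF",
]
print(unauthorised_serials(log))  # {'DEADBEEF'}
```

Scanning the full 900,000-line file this way takes seconds rather than the hours a manual review would need.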

Picture 7: Evidence of a missing serial number

With this serial number, locating the dates and times of the USB device connections is made possible, as well as monitoring the operators and external workers that were on that production line on the stated dates and therefore identifying the supposed attacker.

Picture 8: Information from the USB device

Could it have been detected before?

Of course. With a managed service like the one provided by Sothis, which includes network intrusion detection systems, log collection and audits, incident management, and real-time event monitoring from our SOC.

Thanks to this technology, we are able to detect:

  • Firmware changes or PLC, RTU or HMI replacement.
  • Device listings from non-validated IP addresses.
  • Delays in responses from OT devices.
  • Failed authentications from HMI or PLC devices.
  • User account creation on the operating system.
  • Internal or external attacks on the industrial network.
  • Monitoring of exposure to the internet.
  • Controlled and manageable VPN access.
  • Firewall rule changes.

Migration Cockpit on S/4HANA

Companies are compiling ever larger amounts of data at their centres, especially in predominantly technology-based industries, with somewhat different aims depending on the sector. This data may contain confidential information, so handling it requires great caution. The arrival of S/4HANA has significantly altered the data migration process on SAP.

The evolution of data migration and adapting SAP

The trend towards an increase in the volume of data handled by companies globally is clearly still on the up, as predicted by the UN Economic Commission for Europe, and will undoubtedly continue in future years.

Regarding the deployments of our technological solutions, the data migration process between the old client system and the new one is a critical process. Handling the tools provided by SAP correctly is key to processing the data pending migration.

Until the arrival of S/4HANA, the tool available for this process on SAP was the Legacy System Migration Workbench (LSMW), which the consultant used to configure their own migration file.

Generally speaking, the files used for mass data migration were created from scratch, by building a file with all the necessary fields in the objects that would subsequently be migrated to the new system. This was often a repetitive and tedious process until a template was obtained that could successfully migrate all the desired objects.

With the arrival of S/4HANA in 2015, the data migration process changed significantly with the Legacy Transfer Migration Cockpit, a clear improvement.

Advantages of the new migration cockpit

The workflow follows the same structure: data is extracted from the old system into a set of data files and migrated to the new system. But now SAP offers a much more guided and pre-configured process than could have been hoped for in previous versions.

The Migration Cockpit is structured into projects, which include the migration objects, which in turn contain the migration files for each object.

The cockpit includes pre-created templates with many of the fields used in the different migration objects. This streamlines the work of the consultant, who only has to adapt the template to the pre-existing SAP object. The field configuration is also ready.

The files are easy to configure using the Legacy Transfer Migration Object Modeler, which allows them to be personalised by adding or removing fields and even programming rules for each field depending on the client’s needs. The hierarchy of the fields, as well as whether each one is mandatory, is also defined using this tool.

Processes such as mapping the fields added to the file are now done using drag and drop, which is much more visual and user-friendly:

If we want a specific field to be migrated, we just have to drag it from the list.

On the other hand, the transfer of data to SAP involves 4 steps:

  1. Data validation: the file’s structure is checked, verifying that the format of the input data is suitable.
  2. Conversion: if necessary, some values are converted into the format required by SAP.
  3. Simulation: before the import is executed, a simulation checks that all the data about to be migrated will arrive in the SAP environment correctly.
  4. Execution.
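The four steps can be sketched conceptually; the function names and checks below are simplified, invented stand-ins for what the Migration Cockpit does internally:

```python
# Conceptual sketch of the four transfer steps (invented names and checks)
def validate(rows):
    """Step 1: structural/format validation of the input file."""
    return all(isinstance(r.get("amount"), (int, float)) for r in rows)

def convert(rows):
    """Step 2: convert values into the format required by the target system."""
    return [{**r, "amount": f"{r['amount']:.2f}"} for r in rows]

def simulate(rows):
    """Step 3: dry run before the real import; here just count records."""
    return len(rows)

def execute(rows):
    """Step 4: execution of the actual import."""
    return f"migrated {len(rows)} records"

data = [{"material": "M-01", "amount": 10}, {"material": "M-02", "amount": 2.5}]
assert validate(data)
converted = convert(data)
assert simulate(converted) == 2
print(execute(converted))  # migrated 2 records
```

The value of the real tool is that validation, conversion and simulation are pre-configured per migration object, so errors surface before anything touches the productive system.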

Conclusions: The future of data migration on SAP

This tool is expanding SAP’s functionality with each new version, adding new templates.

An example is the 16 new templates included in version 1909, bringing the total to 85 migration objects, with which it is hoped that this extremely critical and delicate data migration process between systems can continue to improve.

Further to this, it should be noted that this tool is valid for both SAP’s on-premise and cloud systems.

Business trends vs. technological realities

We are used to seeing technological trends in the market treated as separate from the way that companies or other types of organisation work, when in reality they are closely linked. In this article we will briefly explore this interaction.

If we look at business activity, we can see how the way a company or organisation works (e.g. its consumption models) is modified by technological trends (e.g. mobile apps) and, in turn, how the use of mobile technologies drives changes in the way we interact with service providers (such as public authorities, through their digital channels).

The reality is that the two intersect, given that they feed one another and achieve better results when they interact in harmony.

Something quite common for all of us is buying online, which is an example of a business trend. This business model poses new challenges, mostly in what we like to call the last mile, for organisations used to dealing with their clients in store. An essential element in a company’s relationship with its client is the user experience: if it’s good, they will come back; if it’s bad, they will cross us off their list and post about it on social media.

This is where one of the biggest challenges lies: how to guarantee that user experience when the means of service provision are out of the organisation’s hands. Generally speaking, online sales involve distribution logistics, the quality of which is beyond the seller’s control. We often see complaints from clients regarding the delivery of their purchased products, which has a direct impact on the seller rather than the logistics company. It is therefore important to have mechanisms in place that allow us to monitor the quality of the end-to-end service for our client.

Examples would be monitoring mechanisms designed by the product owners that track the shipment cycle; a rapid-response mechanism for any problems detected by the client, whether via a social media account or a call centre that can respond to reviews posted on those sites; or the definition of KPIs with the logistics companies (measured by delivery speed and the client’s satisfaction with the service). These are just a few of the points that could be addressed along the provider’s value chain.
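A delivery KPI of this kind could be sketched as follows; the weighting, the penalty per late day and the field names are invented for illustration rather than taken from any real agreement:

```python
from datetime import date

# Hypothetical end-to-end delivery KPI: combines delivery speed and the
# client's satisfaction score, as might be agreed with a logistics partner.
def delivery_kpi(promised: date, delivered: date, satisfaction: int) -> float:
    """Return a 0-100 score; satisfaction is a 1-5 survey rating."""
    days_late = max(0, (delivered - promised).days)
    speed_score = max(0, 100 - 20 * days_late)        # -20 points per late day
    satisfaction_score = (satisfaction - 1) / 4 * 100  # map 1-5 onto 0-100
    return round(0.5 * speed_score + 0.5 * satisfaction_score, 1)

print(delivery_kpi(date(2020, 3, 2), date(2020, 3, 2), 5))  # → 100.0
print(delivery_kpi(date(2020, 3, 2), date(2020, 3, 4), 3))  # → 55.0
```

Computing such a score per order makes it possible to monitor the logistics partner continuously instead of reacting only when a public complaint appears.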

Handling information

This leads us on to another very contemporary point: handling information, which is ever more abundant and often not properly structured. How can a company handle the amount of data generated by hundreds or thousands of separate orders? And how can information be extracted from this sea of data in order, for example, to improve and optimise sales processes? At this point we turn to technological solutions that include automation, on-demand processing in the cloud and the creation of control panels geared towards continuous improvement. This is where technology provides a response to large-scale automation: Machine Learning (ML) is one of the first technologies that can be applied to reach this objective, followed by Artificial Intelligence (AI) as an additional step in this large-scale automation and extraction of information. Without a clear strategy for handling data and obtaining important information, the sustainable growth of any organisation is impossible, hence the importance of having these technologies at our disposal as a response to our clients’ evolution.

Coupled with this client evolution, we can also say that there is no longer just one point of contact with clients or with a business user, as was the case during the computer era. Nowadays we have multiple devices that allow us to use different interfaces according to our needs, as has started to be the case with so-called wearables (smart watches, pressure monitors, etc.), which incorporate additional elements of interaction with our clients and users and in turn allow us to offer different experiences depending on the moment. These devices could be considered part of the personal IoT; however, there is also the industrial IoT, which requires lower latency in order to work efficiently. Thus the need arises for what is known as edge computing, whereby decisions are made in situ, as acting in real time is vital.

Autonomous devices

An example of this trend can be seen in the growing use of autonomous elements (drones, robots, machinery, etc.). These devices span a range of intelligence, from semi-autonomous to completely autonomous, and operate in different environments, whether air, sea or land. They capture large amounts of data and process it using AI to perform tasks that would normally require a human. This entails local processing, sending data to the cloud (or to a central location for collecting historical data), communications links with a broad scope of operation, security mechanisms with strict controls, and highly automated handling centres. We could say that the evolution of these systems will lead to driverless cars on the roads in the not-too-distant future.

To meet these processing needs close to the point of decision, a trend is emerging on the public cloud: the distributed public cloud (as Gartner calls it), whereby public cloud provider services are no longer centralised and migrate out of the provider’s data centres, while the provider remains responsible for every aspect of the service’s architecture. This trend will create a new operational and service model for cloud services.

This fast technological evolution, the availability of multiple channels of interaction and the demand from clients and users for greater readiness and response speed also creates a trust crisis. How can we guarantee that the information that organisations gather in order to provide better services is processed appropriately? And, on the other hand, how does the organisation recognise the growing responsibility to compile, store and use information in a trustworthy and auditable way, as well as having mechanisms in place to minimise the theft or improper use of said information?

Guaranteeing security

This is where governance and security management come into play, along with the implementation of technologies that allow security to be managed. It should be noted that ML and AI are also going to play an essential role in guaranteeing company security, considering the exponential growth of points of presence and data collection, which means controlling and protecting more information. Being able to scale by automating and recognising patterns requires a large processing capacity, and it seems clear that tools such as ML and AI will be fundamental in managing the actions that the growing quantity of events will demand.

An aspect that generally tends to be overlooked is the reach of communications networks, their capacity to transport information and the latency introduced by the link. These are critical when services are required in real time and, even if progress is sustained, changes to traditional architectures are needed to meet current business needs: on the one hand, users have more residential and mobile capacity; on the other, we are increasingly rolling out control and management elements at points that traditionally required a visit from an operator. Besides this, we are also gravitating more towards cloud services.

BIM and Concrete: the future of the construction sector

In this article we will analyse one of the biggest trends in the digitalisation of the construction sector: BIM methodology and how SAP Concrete can help us make our projects more efficient.

What is BIM?

BIM (Building Information Modelling) is one of the most innovative and promising pieces of technology in the construction sector today.

Unlike many others we can think of, BIM is more than a simple data format or 3D modelling tool. It is a collaborative working methodology for the creation and management of construction projects that enables end-to-end analysis and management from the initial phases to the end of its useful life and aids decision-making by all construction project participants (developer, constructor, architect, engineer, and so on).

Construction projects have faced two main problems in recent years:

  1. Lack of interoperability between the software used by different collaborators, and all the barriers that throws up in terms of exchanging information.
  2. The unique conditions of the construction sector, widely evidenced by cost overruns and extended deadlines, influenced by design changes, scheduling, risk uncertainty, lack of communication and so on, which, in turn, affect price, deadlines, and even quality.

The BIM methodology has been developed to minimise all these problems and consolidate all the relevant information concerning a construction project in one single place for subsequent analysis and management. What’s more, this information, included in the modelling, can be modified, extracted, entered, and updated, which allows for responses to all changes that may happen over the project lifecycle (in other words, it isn’t a snapshot of the initial phase of the project in the way that a printed plan is).

For a better idea of what all this means, we just have to think of the following example: to date, walls have been sketched on a 2D plan using four lines. In BIM systems, a wall is a unique element including graphic and non-graphic information. The graphic information provides numerous views which allow the same wall to be seen from different perspectives, and if a decision is made to change it, it will automatically be modified in each one. This automatic modification saves reworking and reduces inconsistencies between different views.
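The principle can be sketched in a few lines of Python. The classes below are a toy illustration, not real BIM software: every view holds a reference to the same wall element, so a single change shows up everywhere.

```python
# Minimal sketch of the idea: one wall element, several views of it.
# Changing the wall changes every view, because the views reference the
# same element rather than holding a copy of its geometry.
class Wall:
    def __init__(self, length_m, height_m, material):
        self.length_m = length_m   # graphic information
        self.height_m = height_m
        self.material = material   # non-graphic information

class View:
    """A plan, section or elevation rendered from the shared model."""
    def __init__(self, name, wall):
        self.name = name
        self.wall = wall           # reference, not a copy

    def describe(self):
        w = self.wall
        return f"{self.name}: {w.length_m}x{w.height_m} m {w.material} wall"

wall = Wall(4.0, 2.5, "brick")
views = [View("Plan", wall), View("Elevation", wall)]
wall.material = "concrete"         # one change...
print(views[0].describe())         # ...appears in every view
print(views[1].describe())
```

Contrast this with a set of 2D drawings, where the same edit would have to be repeated by hand on every sheet, with each repetition a chance to introduce an inconsistency.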

What is IFC?

IFC is a global, open, standardised data format for the exchange of data between different software systems, similar to a PDF file applied to BIM, which is intended to boost interoperability between different programs.

In BIM, software from different companies (ArchiCAD, Revit, and so on) can be used and the IFC format acts as a bridge when exchanging valid information, regardless of the software being used.

IFC has been designed to produce all the building information throughout its lifecycle, including final documents such as assembly instructions, maintenance information, repair instructions, and so on.
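Under the hood, an IFC file is plain text in the STEP physical file format (ISO 10303-21), where each entity occupies a numbered line. As a rough illustration (the sample entities below are heavily abbreviated and not valid IFC), a few lines of Python are enough to tally the entity types in such a file; real projects would use a full IFC library instead:

```python
import re
from collections import Counter

# Abbreviated, made-up fragment in the STEP style used by .ifc files:
# each entity is a numbered line such as "#12=IFCWALL(...);".
SAMPLE = """\
#1=IFCPROJECT('0YvctVUKr0kugbFTf53O9L',$,'Office block',$,$,$,$,$,$);
#2=IFCWALL('2O2Fr$t4X7Zf8NOew3FLOH',$,'Wall-01',$,$,$,$,$,$);
#3=IFCWALL('1hOSvn6df7F8_7GcBWlRGQ',$,'Wall-02',$,$,$,$,$,$);
#4=IFCDOOR('0gLyUatGr1Fv4d7GVgoGE1',$,'Door-01',$,$,$,$,$,$,$,$);
"""

ENTITY = re.compile(r"^#\d+=([A-Z0-9]+)\(", re.MULTILINE)

def count_entities(step_text):
    """Tally how many times each entity type appears in a STEP text."""
    return Counter(ENTITY.findall(step_text))

print(count_entities(SAMPLE))  # IFCWALL appears twice
```

Because the format is open, text-based and standardised, any tool, from a full BIM suite down to a throwaway script like this one, can read the same file, which is precisely what makes it work as a bridge between programs.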

The need for IFC and BIM

As BIM has become mandatory in certain public sector bidding processes, IFC is even more important as bid specifications cannot stipulate the use of a specific software brand. The only option, therefore, is adapting to the single, standardised IFC format.

This means that from now on public bodies will request the plans in PDF format and the IFC file of the BIM model.

BIM dimensions

“Dimensions” are one of the most important concepts in BIM. These are basically collections of information of a specific type that differentiate project phases based on the nature of the data.

They can be briefly summarised in the following manner:

The first dimension, 3D, relates to the graphic design phase.
Next, we can form a central block in which we would place 4D and 5D, which cover the project construction process.
Lastly, 6D and 7D concern environmental analysis and management of the construction lifecycle.
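As a quick reference, the grouping above can be captured in a small lookup table; the one-line descriptions simply paraphrase this summary:

```python
# BIM dimensions as summarised above (descriptions paraphrased).
BIM_DIMENSIONS = {
    "3D": "graphic design phase (geometric model)",
    "4D": "construction process: planning and scheduling",
    "5D": "construction process: cost management",
    "6D": "environmental analysis",
    "7D": "management of the construction lifecycle",
}

for dim, phase in BIM_DIMENSIONS.items():
    print(f"{dim}: {phase}")
```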

SAP Concrete 3.0

The Concrete tool, developed by Sothis and certified by SAP as a standardised vertical solution for the sector, is aimed at the planning and management of project execution in timeframes, measurements, unit prices, performance, resources and costs, and the management of certification held by external companies such as subcontractors, industrial firms, and so on.

Furthermore, all this execution management can be carried out over the full lifecycle of the project, which makes it possible to restructure the project at any time, by updating the projected unit costs to the prices actually paid, and perform monthly progress-based costs forecasts, production oversight, and inventory management. All this information can be exchanged with other 4D and 5D software such as Arquímedes, Presto, MS-Project, and so on.

Sothis has made great progress in this area through its vertical Concrete solution and has moved with the times in the sector by making it possible to integrate with BIM methodology as its 3.0 software manages the 4D and 5D dimensions, which are the planning and cost management phases of the project, respectively.

How we apply the lean methodology in our projects

Last month we got a new project underway with our client and partner Grupo Ubesol.

Just like in every project, as a technical team, the first thing we do is understand the user's needs and expectations, as they are the ones who will use our product. To that end, we pool ideas based on what has worked before, experience from other projects, etc.

We decided to break away from our past experience and apply a model widely adopted by agile methodologies: the lean inception (with some adjustments). Why a lean inception and not a traditional meeting between the different heads of department (or not only with them)? Because it puts the user's knowledge, expectations, needs and skills at the heart of the decision-making process. Because it allows the product user to choose how he/she wants to prioritise and organise deliveries. Because it gives us the peace of mind inherent in following a transparent process, where communication is ongoing and where deliveries are agreed with specific results.

For us, the project's success lies in the end user being pleased with our product, and in being able to provide him/her with as much information as possible while intruding into his/her day-to-day life as little as possible. Without extensive knowledge of who is who in the project and what everyone does, we believe this result is difficult to achieve.

How does this lean inception work?

Through a series of practical exercises or workshops, the aim is for as much knowledge as possible to be shared between the technical team and the end user about the needs the project must address.

We kicked off the project with a quadrant in which the technical team and users established what the product we are going to work on is, what it does, what it is not and what it does not do. In this way, we laid a clear foundation that enabled us to understand the scope of the project. Furthermore, we defined the project objectives and looked into issues such as what the users expect to achieve with the implementation of the new product, making it clear to both parties what needs are to be met.

Our next activity was to define the profiles of the users who were going to be involved in the project, their expectations, their objectives, their skills and a very important point: their responsibilities. This allowed us to understand who was going to use our system and to think about how they would use it.

Once we had clear objectives in mind and knew who was going to use the system, we started to build the castle according to the functionalities we wanted our system to have. Carte blanche to ask for anything… with only one condition: the functionalities had to be defined by applying the pattern:

“Given a context…

When an event occurs…

Then a result will be obtained…

To meet the target…”

It is a system inherited from Behaviour-Driven Development (BDD) that forces us all to think about and define acceptance criteria along with each functionality.
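The pattern maps naturally onto an automated test. The sketch below uses an invented stock-reservation feature purely to show the structure; the comments follow the four lines of the pattern:

```python
# The Given/When/Then pattern as a runnable test sketch. The feature
# (stock reservation) and the function names are invented for the
# example; the structure of the test is what matters.
def reserve(stock, quantity):
    """Reserve items; refuses to go below zero."""
    if quantity > stock:
        raise ValueError("not enough stock")
    return stock - quantity

def test_reserving_stock_reduces_availability():
    # Given a context: a warehouse with 10 units available
    stock = 10
    # When an event occurs: a user reserves 3 units
    remaining = reserve(stock, 3)
    # Then a result will be obtained: 7 units remain
    assert remaining == 7
    # To meet the target: orders never exceed what is available

test_reserving_stock_reduces_availability()
print("ok")
```

Written this way, each functionality arrives at the development team with its acceptance criterion already attached: the test passes when the behaviour the user asked for is in place.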

Implementation of the functionalities

The next thing we did was to “draw” how we wanted these features to be implemented, the sketching. Using a whiteboard and marker, we outlined the user experience (UX) together with the user: which fields, which hyperlinks… everything necessary to make it as useful as possible.

The last stage of the process was one of the most interesting and, I think, one of the toughest for the client. We had to order and prioritise all the functionalities that were to be included. We used the MoSCoW pattern for prioritisation (Must, Should, Could, Won't). We required each profile to mark at least one functionality with each category, which forced them to consider whether what they had asked for was really necessary.
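The "at least one functionality per category" rule is easy to check automatically. The sketch below, with invented profiles and functionalities, flags any profile that has not yet used every category:

```python
# Sketch of the rule described above: every profile must place at least
# one functionality in each MoSCoW category. The board data is invented.
MOSCOW = {"Must", "Should", "Could", "Won't"}

def missing_categories(assignments):
    """assignments maps profile -> {functionality: category}.
    Returns, per profile, the categories it has not used yet."""
    return {
        profile: sorted(MOSCOW - set(marks.values()))
        for profile, marks in assignments.items()
        if MOSCOW - set(marks.values())
    }

board = {
    "warehouse manager": {
        "stock alerts": "Must", "barcode scan": "Should",
        "dark mode": "Won't", "CSV export": "Could",
    },
    "operator": {"stock alerts": "Must", "barcode scan": "Must"},
}
print(missing_categories(board))  # the operator still has work to do
```

Forcing every category to be used is the whole trick: nobody can mark everything "Must", so each profile has to decide what they are genuinely willing to live without.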

Based on the data collected, we have drawn up a functional document and will organise blocks of deliveries in iterations. Right from the first delivery, we will be providing functionalities ready for the production environment, providing value to the client from the get-go.

Many of the activities proposed by the model were left undone (sizing, user journeys, MVP canvas, …), but despite this it was a session in which both Grupo Ubesol and Sothis contributed a lot of knowledge and set consistent objectives, without false expectations and with absolute transparency.