PRIMRE/Telesto/Testing and Measurement/Test Planning

From Open Energy Information

Test Planning

Benefits of Comprehensive Testing

Testing is often viewed as an opportunity to demonstrate a technology from an operational perspective. However, testing is an opportunity to learn as much about a technology as possible and therefore should also be viewed from a research and development (R&D) perspective to maximize the knowledge gained. The MRE industry currently lacks the extensive testing experience of aerospace, wind, and many other mature industries that can be leveraged to rapidly advance the technologies. Thus, a comprehensive and incremental testing regime is a necessary component of MRE technology development to move technologies from concept to open-water testing.

Early stage marine energy converter (MEC; wave, tidal, current, wind, etc.) testing is critical to advancing a technology from lower technology readiness levels (TRLs 3-5) to higher TRLs. The beginning stages of testing often involve small scale physical prototypes that dynamically represent the full-scale concept. These physical models range from complete MECs tested in a tank to a single component, such as a novel power take-off (PTO), tested on a dynamometer; scales typically range from 1:100 to 1:10. Coupon testing of materials and coatings also occurs at these early stages of technology development. These small-scale prototype tests provide many benefits, including validating operational principles and numerical models, developing control strategies, characterizing loads, and validating performance predictions early in the design cycle, when issues can be addressed quickly at lower cost. Further, design iteration at smaller scales can be completed with lower costs and shorter timelines than at larger scales (higher TRLs)[1]. Lower TRL testing also provides important data and knowledge that inform later stages of design.
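The scale ratios quoted here (1:100 to 1:10) are typically governed by Froude similitude, which preserves the ratio of inertial to gravitational forces between model and full scale. As an illustrative sketch (the scaling exponents are standard Froude relationships, not taken from this article), the multipliers that map model-scale measurements to full scale can be computed as:

```python
# Froude similitude: multipliers that map model-scale measurements to full
# scale for a geometric scale ratio s (full-scale length / model length).
# Standard relationships: length ~ s, time ~ sqrt(s), force ~ s^3, power ~ s^3.5.
def froude_multipliers(scale_ratio: float) -> dict:
    s = scale_ratio
    return {
        "length": s,
        "time": s ** 0.5,      # wave periods stretch with sqrt(s)
        "velocity": s ** 0.5,
        "force": s ** 3,       # assumes the same fluid density at both scales
        "power": s ** 3.5,
    }

# Hypothetical example: a 1:50 model absorbing 2 W of power corresponds to
# roughly 2 * 50^3.5 W at full scale.
full_scale_power_kw = 2 * froude_multipliers(50)["power"] / 1e3
```

One practical consequence: a 1:100 flume test compresses wave periods by a factor of ten, so model-scale data must be rescaled carefully before comparison with field measurements.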

The process of advancing to higher TRLs (6-9) includes larger scale components, system integration, and full-system open-ocean testing. At these larger scales, testing progresses from isolated component tests to integration tests prior to a sea deployment of a complete MEC. Larger scale component testing is performed to evaluate a single element of a MEC under expected, extreme, and emergency conditions. These tests occur in isolation so that a single component can be fully evaluated and improved without the complex interactions of a complete system. Because MECs consist of many components, incremental integration testing is also recommended to detect and solve interaction issues before the complete system is assembled and root causes become harder to identify. The final stage is field testing at large or full scale to evaluate the complete system in its intended operating environment. There are numerous benefits to incremental larger scale testing, including optimizing energy capture while minimizing loads, lowering the levelized cost of energy (LCOE), demonstrating performance, mitigating environmental challenges, developing manufacturing methods, and ultimately obtaining certification.

Figure 1 - Advancement of a New Technology Towards Cost Competitiveness and Commercialization: Starting in the center is the concept of a new technology. As the technology progresses from lab to field validations, the understanding of the design and the TRL increase. Effective and thorough testing at the lower levels is essential to understanding all design considerations and eventually producing a tried-and-tested system.

Testing Levels

Testing of MRE technologies is typically conducted at four basic levels:

  1. Unit Tests: Tests that involve one or more test specimens evaluated in a controlled environment under prescribed conditions. Examples of unit tests include:
    • A set of material coupons immersed in a saltwater environment for a prolonged period prior to being placed on a tensile test machine to evaluate the impact of saltwater intrusion on material strength
    • Lap-shear coupon testing for determining mechanical material properties of novel material and adhesion methods
    • Metallography to identify grain structure and the associated material characteristics for electric machines and metal components
    • Fiber burn off for determining volumetric ratio of FRP
    • Concrete compression testing for characterizing micropile structural integrity
    • Solar radiation resistance testing to determine a material’s ability to withstand constant sunlight
    • Biofouling prevention coating testing to determine the material’s ability to resist deterioration from biofouling
    • Non-destructive testing of materials to identify defects in materials
    • Cold climate testing of materials
    • Mechanical seal integrity testing to prevent premature seal failure

  2. Component Tests: Tests that involve a component or subassembly that is evaluated under prescribed conditions. Examples of component tests include:
    • A tidal energy foil attached to a rigid test stand that is excited with moving masses to evaluate the dynamic response, characterize fatigue, and identify structural defects
    • A wave energy PTO (drivetrain and generator) attached to a linear dynamometer and driven at wave frequencies to identify defects and evaluate efficiency, power output, fault response, wear, noise, and self-heating
    • Modal testing to determine the resonant frequencies of a system
    • Electric motor power curves to characterize motor performance under variable loading
    • Mooring system structural testing to ensure mooring lines can withstand the expected loading in extreme weather conditions
    • Component life cycle testing to prevent the fatigue failure of critical components
    • Lightning testing to ensure a component can withstand lightning strikes
    • Hydraulic system testing to prevent the premature failure and environmental damage from leaks
    • Cold climate testing of components
    • Hardware-in-the-loop (component) testing to build confidence in simulation quality

  3. Laboratory Tests: Tests that use a test tank (wave flume, wave basin, tow tank, etc.) to evaluate MEC technology under a series of prescribed resource conditions. The test articles can range from a sub-scale (1:100 – 1:10) prototype of a utility scale MEC to a full scale Powering the Blue Economy (PBE) MEC. Resource conditions may include one or a combination of currents, waves, and wind. Examples of laboratory tests include:
    • A 1:100 scale wave energy converter concept tested in a wave flume to evaluate the WEC response to monochromatic waves and to collect data to tune the WECSim model.
    • A full scale PBE off-grid river current turbine tested in a flow channel to characterize power performance and evaluate the power electronics while connected to a nanogrid.
    • 1:15 scale WEC (wave energy converter) deployed in a wave basin to tune and validate the control system to maximize energy production while minimizing loads in a combination of regular, irregular, unidirectional, multidirectional, operating, and storm seas
    • Heated tank for ocean thermal energy conversion
    • Small scale desalination with a salt water brine tank
    • Underwater autonomous vehicle deployed in a wave flume or tow tank for complete system testing

  4. Field Tests: Tests conducted in oceans, rivers, or other locations where the test article is subject to the natural, uncontrolled occurrence of the resource and environment, up to and including extreme storms. Test locations range from sheltered sites with benign conditions, used to test newer technologies, to full open-ocean sites where a higher TRL test article will experience the full range of conditions expected in its operating life. Field test sites must be judiciously chosen so that resource conditions are representative of the test article design conditions and extreme events are not likely to exceed survival design load cases. Examples of field tests include:
    • A 1:4 scale wave energy converter deployed in the ocean for the first time at a sheltered site with resource conditions that mimic an open-ocean site, but at 1:4 scale. The test article is used to evaluate deployment and recovery strategies, while loads and motions are measured.
    • A full scale, grid connected wave energy converter deployed offshore where it is exposed to full open-ocean conditions for a duration of one year to develop its power performance matrix, characterize its power quality, measure structural loads, and evaluate the storm survival strategy
    • A commercial ready tidal energy converter deployed in a tidal channel for final conformity assessments to the IEC TC114 technical specifications prior to purchase by a tidal energy developer
    • Deployment of a full scale autonomous vehicle programmed to monitor local wildlife
    • Deployment of a waves-to-water full scale desalination system to remote oceanic communities
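The power performance matrix mentioned in the full-scale field test example is, at its core, mean electrical power binned by sea state. A minimal sketch, assuming illustrative bin widths rather than the bin definitions in the IEC technical specifications:

```python
# Sketch of building a WEC power matrix: mean electrical power binned by
# significant wave height (Hs) and energy period (Te). Bin widths below are
# illustrative assumptions, not taken from the IEC technical specification.
from collections import defaultdict

def power_matrix(records, hs_step=0.5, te_step=1.0):
    """records: iterable of (hs_m, te_s, power_kw) tuples, one per sea-state
    averaging interval. Returns {(hs_bin_index, te_bin_index): mean power kW}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hs, te, p in records:
        key = (int(hs // hs_step), int(te // te_step))  # bin indices
        sums[key] += p
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Hypothetical averaging intervals: (Hs [m], Te [s], mean power [kW])
obs = [(1.2, 6.3, 14.0), (1.3, 6.8, 16.0), (2.6, 8.1, 40.0)]
pm = power_matrix(obs)
```

In practice each cell would also carry an interval count and uncertainty estimate, and cells with too little data would be reported as unfilled.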

Test Planning

Success of a test often depends on a well-written test plan, which should be referenced and updated at all times during planning and execution. Additional reasons for developing a test plan include:

  • Communicate the testing to participants and stakeholders
  • Facilitate a comprehensive review that will help to ensure all aspects (risk, safety, testing needs) have been properly considered
  • Provide a systematic guide to setting up, executing and decommissioning an experiment
  • Support any regulatory or legal review and approval


In many instances, such as when testing to a standard or when a test is sufficiently different or independent of other tests, independent test plans are recommended. Understanding the testing process is essential to developing an effective test plan that will yield the highest probability of success, including safety and delivering defensible data. The following flow chart provides a high-level summary of the testing process.

Figure 1 - Summary of the Testing Process: Each element of the testing process is essential and must be completed to ensure that the maximum value of a test is achieved.

Scope and Objectives

The first step in developing a test plan is to clearly define the scope and objectives. This will keep the testing focused with a clear understanding of what needs to be done. The purpose of the test and what quantifies a successful outcome must initially be defined as this helps to further clarify the types of tests, facilities, infrastructure, measurements, and analyses that are needed. A few reasons to test components, sub-systems, and systems include:

  • Verify working principles and system function
  • Measure motions and station keeping
  • Collect data for numerical model verification and validation
  • Develop, optimize, and evaluate control systems
  • Characterize and evaluate environmental stressors
  • Validate power electronics and characterize power quality
  • Conformity assessments and certification
  • Understand and evaluate environmental interactions and effects
  • Component integration
  • Evaluate safety response
  • Evaluate component bonding
  • Establish power absorption and output power matrices/curves
  • Measure structural loads
  • Gain experience in installation, operation, maintenance, and recovery
  • Characterize fatigue and wear
  • Evaluate storm/safe mode operations
  • Evaluate selected materials and coatings
  • Quantify operation and maintenance costs
  • Device qualification/commissioning
  • Determine equipment, vessels, and procedures needed at all life-cycle stages


As part of defining the scope and objectives, it is also recommended to develop a high-level list of test requirements that define:

  • Scale and function of the test article
  • Type of tests to be performed
  • Selected facility and facility capabilities
  • Applicable standards and guidelines (and what sections will be followed)
  • Measurements
  • Data products

Elements of a Test Plan

Once the scope and objectives have been determined, the detailed sections describing the test and how the scope and objectives will be met can be drafted. For accredited testing under an IEC Technical Specification, the accredited test facility must review and approve the test plan for the portion of the testing that is to be accredited. The following list provides an overview of the various sections of a test plan that will be expected by the accrediting test facility.

Introduction and Background
Provision of basic background information for the test and equipment under test. Specific subsections should include:

  • Test scope and objectives
  • Test duration
  • List of reference documents (DAS and instrumentation design and engineering drawings, test article engineering documents and drawings, support equipment manuals, interface documents, project plan/Gantt Chart, etc.)


Roles and Responsibilities
A table of roles and responsibilities of all participants that includes names, affiliations, and contact information.

Description of the Test Article
A description of the test article that includes:

  • Renderings or drawings
  • Dimensions and weights
  • MEC and mooring configurations
  • Identification of the components
  • Other information that helps familiarize readers with the MEC
  • Identification of intellectual property rights (since the test plan will be shared amongst test participants)


Description of Testing Site or Test Facility
For a test site, this section should provide an overview of the staging site, test area, shore infrastructure, and support equipment, such as vessels. Information should include a bathymetric map of the site, the deployment coordinates of each article to be deployed, historic metocean conditions at the site, and any information on site calibration and valid measurement sectors that will govern data acceptance/rejection. For each test article, information such as watch circles, anchor locations, and cable runs should be provided. For a test facility, such as a wave flume or structural test stand, this section should provide an overview of the facility, the test equipment, and testing capabilities.

Test Equipment
A description of the proposed test and measurement equipment, including manufacturer, model, accuracy, location, and traceability of instrument calibrations and measurements. For systems with many sensors, it is recommended to establish a naming convention for individual sensors that is maintained throughout the testing process. It is also recommended to plan for and document equipment redundancy for critical components that may fail during the testing process.

Test Overview
A high-level overview of the types of tests that will be conducted, and the goal of each. This section should also include a detailed Gantt chart of testing activities.

Test Procedures
The step-by-step testing procedures for each test along with pass/fail criteria and capture matrices. The capture matrices are a critical component of the test plan as they define the data requirements of each test. For example, a capture matrix for a power performance test would define the amount of data needed under different resource conditions to ensure a valid power matrix/curve per the IEC power performance technical specifications.
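The bookkeeping behind a capture matrix can be sketched as follows; the bins and required interval counts are hypothetical, and a real matrix would follow the applicable IEC technical specification:

```python
# Illustrative capture matrix: for each sea-state bin, how many valid
# averaging intervals are required versus collected so far. The bin labels
# and required counts are made up for illustration only.
required = {  # (Hs bin, Te bin) -> required number of valid intervals
    ("1.0-1.5 m", "6-7 s"): 20,
    ("1.5-2.0 m", "7-8 s"): 20,
    ("2.0-2.5 m", "8-9 s"): 10,
}
collected = {("1.0-1.5 m", "6-7 s"): 20, ("1.5-2.0 m", "7-8 s"): 12}

def incomplete_bins(required, collected):
    """Return the bins still short of their data requirement and by how much."""
    return {b: req - collected.get(b, 0)
            for b, req in required.items()
            if collected.get(b, 0) < req}

remaining = incomplete_bins(required, collected)
```

Tracking the matrix this way during testing makes it easy to decide which resource conditions still need test time before the campaign can end.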

Reporting
A list of reports with expected content and due dates. Reporting of the test should include, but is not limited to, test readiness, status updates, and final reports.

Data Management
A detailed description of how the data will be handled and by whom. This section should include the following:

  • A list of the types of data that will be collected
  • Descriptions and charts of data flow, quality assurance procedures, quality control checks, processing, products, and storage/backup
  • A description of the metadata
  • Identification of data access and sharing limitations


Appendix 1: Safety Compliance and Safe Operating Plans
A collection of the relevant:

  • Safety checklists
  • Compliance certificates
  • Safe operating plans
  • Personnel training and competence


Appendix 2: Operating Procedures
A collection of the procedures for operating the test article and measurement systems. These should include procedures for startup, normal operation, extreme operation, maintenance, shutdown, and emergency response.

Appendix 3: Risk Assessment and Mitigation
The section should include a risk matrix or a list of risks with mitigation strategies. This can also include a failure mode and effects analysis (FMEA).

Appendix 4: Test Instrumentation and Hardware Specifications Sheets
A table of the test instruments, sensors, DAQ hardware, control computers, and other hardware used to conduct the test. The table should include the following categories:

  • Name
  • Manufacturer
  • Model
  • Serial number
  • Owner
  • Power
  • Output signal
  • Mounting location
  • URL or other location for spec sheet, user manual, and supporting software


Appendix 5: Data/Channel List
A table of measurements and calculated data. The table should include the following categories:

  • Channel name
  • Channel description
  • Measured or calculated
  • Instrument/sensor source
  • Data rate
  • Data type
  • Unit
  • Destination file


Appendix 6: Electrical and Mechanical Drawings
All electrical and mechanical drawings needed to install and repair the test article, including:

  • Line drawings of the electrical systems
  • Line drawings of the measurement system
  • Assembly drawings

Importance of High-Quality Measurements and Data

Comprehensive, high-quality, and defensible measurements are the cornerstone of testing and are critical to delivering a successful test. Data produced from the testing measurements provide information that is key to understanding and characterizing the technology as well as guiding future design iterations. A successful measurement campaign begins with a detailed understanding of what needs to be measured, how to make those measurements, and how to process the data. This requires knowledge and hands-on experience with many concepts of measurement, including:

  • An intuition for expected device operation
  • A strong understanding of sensors and instrumentation
    • Signal conditioning
    • Protection and routing
    • Data processing
    • Similitude
    • Testing
    • Accepted practices and standards
    • Sensor operation and protection in marine environments


Without quality test data, technologies may move forward with insufficient feedback on the design. This can lead to design revisions at higher TRLs, thus increasing project costs and development timelines. Additionally, lack of quality data may require use of higher safety factors to account for uncertainty. In the worst case, poor quality data can lead to false conclusions and faulty designs that do not work as predicted or result in a failure or personal injury.

Figure 2 - The Importance of Data in MRE Development: The ability to collect and store quality data is critical at all stages of MRE research and technology development.

Pre-Testing Procedures and Readiness Verification

Testing components, subsystems, and complete systems involves the integration of one or more test articles with the data acquisition system, the control system, and the test infrastructure. Many of these systems are new, untested, or configured in a novel way, which creates an elevated risk of failure. Even minor unintended design, fabrication, or assembly errors can result in problems that surface during testing. For these reasons, it is essential to identify and correct any technical issues prior to testing. The following test readiness verification procedures are important to ensure that the test article is ready for testing.

1. Dry Sub-Assembly Test. All components should be individually tested for function prior to integration into subsystems. Similarly, once subsystems are assembled, they should also be tested for function prior to integration into the assembly. These steps are necessary to reduce the time, cost, and complexity of removing defective parts once the MEC is fully assembled. Ideally, for fully assembled MECs, components such as the power take-offs (PTOs) and blades will have been tested on a dynamometer and structural test stands, respectively.

2. DAQ/SCADA/Controller Function and Stability Test. Measurement and control systems use an integrated system of sensors, data acquisition hardware and complex software. Well prior to the test, the measurement and control systems should be assembled as completely as possible and operated for a lengthy duration (weeks to months) to verify function and stability. The following functions should be verified during this test: data acquisition, conversion, calculations and storage, control operation and stability, different states of operation, communication channels, emergency response, and security features. Potential issues that can cause instabilities in control systems and data streams include sample jitter, measurement delays, sensor drift, heat buildup, and noise. Other problems, such as memory leaks and software bugs, may require longer runs to be detected.
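Sample jitter, one of the instability sources listed above, can be checked directly from logged timestamps. A minimal sketch, with an assumed tolerance of 10% of the nominal sample period:

```python
# Sketch of a sample-jitter check for a DAQ stability test: compare
# successive timestamp deltas against the nominal sample period.
# The 10% tolerance is an illustrative assumption.
def jitter_stats(timestamps, nominal_dt, tol=0.1):
    """Return (max absolute jitter, count of deltas outside tol * nominal_dt)."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    jitter = [abs(d - nominal_dt) for d in deltas]
    bad = sum(1 for j in jitter if j > tol * nominal_dt)
    return max(jitter), bad

# Hypothetical 100 Hz stream with one late sample
ts = [0.00, 0.01, 0.02, 0.035, 0.045]
worst, n_bad = jitter_stats(ts, nominal_dt=0.01)
```

Run over a long stability test, a check like this catches intermittent timing problems (late interrupts, buffer stalls) that a short functional test would miss.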

3. End-to-End Pre-Deployment System Test. Once the test article is assembled and integrated with all measurement and control systems, a set of comprehensive tests should be conducted to verify system readiness. These tests should attempt to simulate operation of the system on the test stand, test tank, or when deployed. Auxiliary systems, such as generators, may need to be brought in to power the test article. These tests should verify operational states, safety functions, electronics and sensor operation, and auxiliary systems. Additionally, all seals, hatches, and connectors should be checked prior to deployment to ensure water does not enter the dry spaces or short connectors. This can be completed by inspection and by either pressurizing or drawing a vacuum on the dry spaces to monitor the pressure. Identifying issues early can save the system from damage and prevent delays and unforeseen costs.
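The pressure or vacuum check on dry spaces described above is often evaluated as a pressure-decay test: log pressure over time and accept only if the decay rate is below a limit. A sketch with an assumed acceptance limit (the limit and log values are illustrative):

```python
# Pressure-decay leak check on a dry space: pressurize, log pressure over
# time, and fit a least-squares slope. The 1 kPa/hr acceptance limit and
# the example log are illustrative assumptions.
def leak_rate_kpa_per_hr(times_hr, pressures_kpa):
    """Least-squares slope of pressure vs time (kPa/hr; negative = decay)."""
    n = len(times_hr)
    mt = sum(times_hr) / n
    mp = sum(pressures_kpa) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(times_hr, pressures_kpa))
    den = sum((t - mt) ** 2 for t in times_hr)
    return num / den

log_t = [0.0, 0.5, 1.0, 1.5, 2.0]                  # hours
log_p = [120.0, 119.8, 119.6, 119.4, 119.2]        # steady 0.4 kPa/hr decay
rate = leak_rate_kpa_per_hr(log_t, log_p)
passes = abs(rate) < 1.0    # accept if decay is below the assumed limit
```

Fitting a slope over the whole log, rather than comparing only start and end pressures, reduces the influence of sensor noise and temperature-driven fluctuations.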

4. Wet System Test. Prior to deployment in a test tank or at sea, a short duration wet test is recommended, where the device is placed in the water to further verify seal integrity, stability, safety system function, and overall sensor and electronic operation. Larger devices can be tested at a shore side facility such as a dry dock, or at a test tank with sufficient depth. If possible, the test article should be powered to mimic operation and verify functionality and dynamic seals.

5. Test Readiness Review and Verification. Before proceeding to installation in a test tank, on a test stand or in the open water, a test readiness review and verification must be completed. This is a review by all stakeholders and third-party collaborators to ensure that the test article is ready for testing. The following tests and procedures should be reviewed and accepted: Dry Sub-Assembly Test, DAQ/SCADA/Controller Function and Stability Test, End-to-End Pre-Deployment System Test, Wet System Test, Open-Water Testing Plan, and Safe Operating Procedures (SOPs). This should also include a review of previous deficiencies and confirmation that they have been corrected, as well as conformity to other established procedures and compliance with permits allowing operation.

6. Initial Sea Trials. For open-water tests, initial sea trials conducted in a benign environment are recommended prior to deployment. Sea trials are essential to prove seaworthiness prior to connecting long term fixings, such as moorings. This also provides an opportunity to verify any ballasting operations that need to be completed prior to connecting to a mooring, pile, or other station keeping system.

Safety Considerations

MECs are multifaceted electro-mechanical machines that are deployed in energetic environments. The combination of moving components, electrical and hydraulic power, sea conditions, people, and sea life, among many other factors, makes MECs a potential hazard to people, property, and the environment. Safety is paramount, and the first step of developing a safe test is to perform a hazard identification (HAZID) for the early identification of unsafe conditions, i.e., what could go wrong throughout the duration of the test, including assembly, installation, operation, maintenance, and recovery. A few typical hazards include:

  • Low/medium and high voltage electricity (electric shock, arc flash)
  • Water (drowning, submergence)
  • Power tools
  • High pressure (hydraulics, compressed gas, cooling, and heat exchange systems)
  • Mechanical oscillating/rotating machinery
  • At-sea transfer
  • Flammable and toxic gases
  • Confined spaces
  • Cold/hot environment
  • Cranes, davits, capstans, tuggers, and other load handling equipment
  • Wildlife (jellyfish, sharks, urchins, barnacles)
  • Work from heights
  • Dangerous weather (lightning, hail, hurricanes)


Once all hazards are identified, a risk matrix is completed to assess the severity and likelihood of occurrence of each potential hazard. The final step is to develop mitigation and control measures. These include, in order of importance:

  • Elimination – remove the hazard
  • Substitution – determine a different way to perform an activity or process
  • Engineering Control – develop methods and equipment to protect the user from the hazard
  • Administrative – identify training requirements to establish qualified workers, develop safe operating plans to ensure that activities are carried out in the safest manner, and limit access
  • PPE – determine the personal protective equipment required for each task


Elements of a Safe Operating Plan
Safe operating plans (SOPs) are essential elements that reduce the risk of injury and death when operating in potentially hazardous environments. SOPs define accepted procedures and equipment needed to safely complete frequently occurring tasks. SOPs need to contain a description of the activity, the location, general requirements, required training, acceptable conditions and restrictions, considerations, the working methodology and a rescue plan or emergency response plan. SOPs should be developed by a team to provide a breadth of insight. SOPs should have a review and approval process that has at least one level of independent review. These should be reviewed at least once a year and refined as knowledge and experience are gained through application. For infrequent or one-time tasks, safe working permits provide a more streamlined method of establishing a safe working practice. While less burdensome than SOPs, work permits still require definition of the task, the safe working methods, safety equipment and a rescue/emergency response plan. Approval is usually issued by a safety officer.

Risk Assessments

Risk assessments are recommended for every test to identify, analyze, assess, and mitigate potential events that may negatively affect testing, i.e., what can go wrong that will prevent a successful test. DOE has developed a risk management framework[2] tailored for the MRE industry. A thorough risk assessment and development of risk mitigation plans prior to deployment is highly recommended to maximize the potential for success and minimize potential personnel, technical, environmental, and fiscal harm. Seven distinct categories of risk should be evaluated: safety, cost, time, scope, quality, environment, and regulation. Risk assessments should be performed throughout the lifecycle of the test, from the initial phases when the test concept is being developed, right through to the conclusion of the test. A risk register[3] should be used as a repository to capture and record all current risk information. The risk register should be treated as a living document that is updated regularly as new risks are identified and as existing risks are mitigated or closed. A brief overview of a risk assessment and consequence analysis is provided herein, along with an outline for a risk mitigation plan.

Risk Identification
Risk identification studies are used to identify the project risks that impact personnel, equipment, or the environment. These can be a result of a failure, an operation or intervention, an unintended action, or an external event (such as a collision with a vessel) that has a direct impact or can initiate a chain of events. These hazards can be identified from past operation of the device under test or testing of similar devices, brainstorming what-if scenarios, and Failure Modes and Effects Analysis (FMEA), among many other techniques. The references provided at the end of this section provide more detail. A risk breakdown structure is a useful tool for breaking down all project risks into common categories.

Probability and Consequence Assessment
Probability assessment aims to determine the probability of an event, or a sequence of events identified in the risk identification, occurring and negatively impacting a test. For electrical and mechanical events, quantitative estimates can be obtained by using databases of historic failure rates (provided by the manufacturer, for example), fault tree analysis (FTA), and other methods for reliability analysis. When historic numbers are not available, fatigue analyses can be used, and best judgment is also acceptable when no other alternative is available. As part of this analysis, site specific data should be considered, such as the occurrence and size of storms. In the absence of hard data, such as for some fiscal and environmental events, expert judgement may be used to estimate the probability of occurrence. The probability of occurrence of events is usually grouped into distinct bins, such as: 0 – p < 0.01%, 1 – 0.01% < p < 0.1%, 2 – 0.1% < p < 1%, 3 – 1% < p < 10%, 4 – 10% < p < 50%, and 5 – p > 50%.

Consequence assessment quantifies the range of possible outcomes that may result from an event or a sequence of events identified in the HAZID. Consequences are typically evaluated from the perspectives of financial loss, injury and loss of life, and environmental and property damage. This can be done from both qualitative and quantitative standpoints. Damages can often extend beyond the incident itself, impact reputation, and may even affect the whole industry; for example, consider the BP Deepwater Horizon incident and its impact on the offshore oil and gas industry. Consequences of events are usually grouped into distinct bins, such as 0 – None, 1 – Insignificant, 2 – Marginal, 3 – Critical, 4 – Catastrophic, and 5 – Lethal, with distinct definitions for safety, cost, time, scope, quality, environment, and regulation.
The probability and consequence assessments are combined in a risk analysis matrix (RAM) with the frequency on the vertical axis and severity on the horizontal axis. As part of this analysis, levels of risk must be defined as acceptable and unacceptable or via a rating scale. Typically, the higher the risk, the more mitigation effort required. The highest risks often require redesign while moderate risks can be handled through more pragmatic measures, such as ceasing power production or some other controllable action.
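Combining the probability and consequence bins into a risk rating can be sketched as follows; the multiplicative score and the rating thresholds are illustrative assumptions, not values from a standard:

```python
# Sketch of a risk analysis matrix (RAM) rating using probability bins (0-5)
# and consequence bins (0-5) as described above. The multiplicative score and
# the thresholds are illustrative assumptions only.
def risk_rating(probability_bin: int, consequence_bin: int) -> str:
    score = probability_bin * consequence_bin
    if score >= 15:
        return "unacceptable"   # typically requires redesign
    if score >= 6:
        return "mitigate"       # pragmatic measures, e.g. cease power production
    return "acceptable"

# Hypothetical example: mooring line failure assessed at probability bin 2
# (0.1% < p < 1%) and consequence bin 4 (Catastrophic)
rating = risk_rating(2, 4)
```

In practice each cell of the matrix would be defined per risk category (safety, cost, time, scope, quality, environment, regulation), and the accept/reject boundaries agreed upon by all stakeholders before testing begins.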



Lessons Learned

  1. Testing is an infrequent event that can be expensive and require significant time and resources. Therefore, a test plan should be approached from a combined research, development, and demonstration (RD&D) perspective with a view to maximize the knowledge gained. A slight increase in scope can often provide valuable data and knowledge that can positively influence future design iterations.

  2. System integration, tuning, trouble shooting, and repair of test articles during the test period can delay testing, resulting in a reduced test scope. Often repairs are rushed and test personnel work long hours to get systems up and running, causing other errors that can cascade through the test. This can be especially problematic at sea where repair windows can be short and vessel costs can be prohibitive.

  3. In-water testing is a critical step in technology development of MEC systems, but it should only occur after a comprehensive laboratory and component testing program is completed. In the past, innovative MEC technologies have been pushed prematurely into full scale open-ocean testing and commercialization without the requisite comprehensive smaller scale and component testing. As a result, the designs were not fully vetted and experienced component failures that prevented operation. Laboratory, component, and integration testing provide the necessary risk reduction that can reduce the cost and duration of in-water testing and greatly increase the chances of success; those who skip steps are likely to perform them at later stages at increased cost and time. In the best cases, companies returned to perform the skipped tests. In the worst cases, companies went bankrupt.

  4. Start planning for testing and measurement at the beginning of and during a design iteration, not just before a test. This will help tailor some of the design analysis to define the testing and measurement requirements.

  5. Testing often costs more than budgeted. Set aside a fixed budget for measurement (hardware, sensors, instruments, installation, monitoring, and data management) and do not draw from it to cover unexpected costs; instead, reduce test scope. A full test with partial or low-quality measurements will not meet the goals of the test and will not provide sufficient knowledge to advance a design; often the test will need to be repeated at higher TRLs.

  6. Extracting the full knowledge from a test typically takes longer than planned – set aside sufficient budget and resources for data analysis and reporting at the end of a test to ensure that it is done well.