Antecedent moisture (AM) conditions, or the relative wetness or dryness of a system, can have a tremendous impact on rainfall-runoff dynamics. The magnitude of the runoff (flow) from rainfall can be affected by how wet the drainage area is from prior conditions. Wetter conditions can produce more runoff, and drier conditions can produce less runoff. The Antecedent Moisture Model (AMM) was developed to accurately simulate these dynamics. The AMM is a grey-box, continuous, deterministic, lumped hydrologic model of the rainfall-runoff process that was developed using the principles of system identification and parsimony. See the full paper on AMM for details on its development and equations and the companion spreadsheet for templates for building models. This article outlines the usage of the AMM for developing a calibrated, validated, predictive hydrologic model that is accurate over varying antecedent moisture conditions.
Developing and testing the model was done primarily with observations from the Midwest U.S., where both preceding rainfall and seasonal hydrologic conditions affect antecedent moisture dynamics. This article primarily focuses on the process for developing an AMM for systems that exhibit those dynamics. In a temperate region where seasonal effects are mild, such as Southern California, the AMM can be simplified; that process is also outlined in this article.
The core steps of the process for using the AMM are listed below. Each step has many details and nuances that would be worthy of its own dedicated article or paper. This article provides an overview of each step as a quick reference for developing an AMM. Additional references or articles are noted in each section where they exist; more will be developed over time, and this article will be updated as they become available.
- Antecedent Moisture Effects on Rainfall-Runoff Systems (video).
- Overview of the AMM modeling process (article).
- AMM Model Development and Equations (paper).
- AMM Equations Companion Spreadsheet (Excel file).
- Tutorial Videos on AMM (index of videos).
Overview of Steps
- Collect flow and rain data
- Filter the diurnal pattern
- Review the rain events
- Calibrate the model
- Validate the model
- Accuracy of fit
- Long-term continuous simulation
- Frequency analysis
- Frequency validation
- Physical interpretation of model
1. Collect Flow and Rain Data
A model is only as good as the underlying data that it is based upon. Good data is critical in developing an accurate model. For simulating the impacts of AM conditions, it is necessary to collect a sufficient amount of data with AM dynamics present. For most systems, this usually means collecting data for at least a full cycle from spring to summer to represent the varying wetness conditions that occur during the seasonal cycle.
Sometimes it is necessary to collect several years of data if a single seasonal cycle is lacking in good antecedent moisture variations. This can occur from a general lack of rainfall, or if a very dry spring or wet summer occurs, which can mask the seasonal AM effects. Collecting multiple years of data is also useful for model validation. The data collected should be evaluated carefully for its purpose in developing a model, and if the data is insufficient, then more data should be collected, or the lack of good data should be considered when evaluating the usefulness of the model for design.
2. Filter the Diurnal Pattern
In separate and combined sewer systems, the flow data collected includes a diurnal pattern (or repeating 24-hour pattern) from the domestic sewage being transported in the system. An example of this diurnal pattern for a typical week is shown in the figure above. This diurnal pattern repeats regularly and needs to be separated from the flow signal to identify the inflow and infiltration (I&I) flows before modeling.
- What is diurnal flow? (article).
- How to create a basic diurnal pattern model (video).
- How to filter the diurnal pattern from the flow (video).
- How I&I flow is separated from the total flow (video).
- Inflow and Infiltration Analysis 101 (article).
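As a minimal sketch of the filtering step, one common approach is to build the diurnal pattern by averaging the flow at each hour of the day (separately for weekdays and weekends) over known dry-weather days, and then subtracting that pattern from the total flow to estimate the I&I signal. The function names and data layout below are illustrative assumptions, not part of any AMM tool; see the linked videos for the full workflow.

```python
from collections import defaultdict

def build_diurnal_pattern(times, flows, dry_days):
    """Average flow in each (weekend-vs-weekday, hour-of-day) slot over dry days."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for t, q in zip(times, flows):
        if t.date() in dry_days:
            key = (t.weekday() >= 5, t.hour)  # (is_weekend, hour)
            sums[key] += q
            counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

def filter_diurnal(times, flows, pattern):
    """Subtract the repeating diurnal pattern to estimate the I&I flow signal."""
    return [q - pattern[(t.weekday() >= 5, t.hour)]
            for t, q in zip(times, flows)]
```

On dry days the residual should hover near zero; during and after rain events, the positive residual is the I&I response to be modeled.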
3. Review the Rain Events
Flow and rainfall data can be affected by a myriad of phenomena that can confound the ability to perform hydrologic modeling. These data should be carefully reviewed and prescreened to flag sections of flow or rainfall data that are not suitable for modeling. Properly screening out events with data issues, non-hydrologic processes, or other complicating factors is important for successfully calibrating an AMM with a high degree of confidence and accuracy. Some of the items that are appropriate to consider when screening rain events include:
- Erroneous flow data – Scrub the flow metering data for any obvious errors in the data such as data drop outs, flat lines, meter drift or other anomalies. Scatter plots can be a good tool for this.
- Rain gauge failure – Examine the rainfall data for drastically mismatched events between rainfall and flow. These may be indicative of a rain gauge failure. Animals or insects can build nests or webs in the rain gauge, plant growth or tree cover can shield a rain gauge, high winds can prevent proper rainfall measurements, vandalism can damage a rain gauge, and many other items can cause a failure. Events with any issue like this need to be flagged and not used in the model development.
- Spatially varied rainfall – Events don’t necessarily have to be spatially uniform to be used in modeling. However, if the rain gauge is too far from the watershed, or the watershed is too large to be represented by a single rain gauge, then the impacts of spatial rain events will have to be considered. In the extreme case, events may have to be removed before developing a model. A good example of this is a short, local, intense burst of rainfall that can occur from a convective thunderstorm. Such a system can produce a large rain over a very small geographic area that can easily be missed by a rain gauge. NOAA’s NEXRAD radar system is a good resource for evaluating rainfall spatial variation.
- Data censoring – Data that is affected in some way, or censored, by components of the system will not be as useful. Storage facilities, system overflows, on-off pump stations, hydraulic restrictions, and diversions will all censor the data, making the peak flows recorded less useful. Flag events where censoring has occurred. Observed peak flows are usually not accurate in censored data and cannot be used in model development. However, depending on the nature of the censoring, total volumes or the rising limb of the hydrograph may still be useful.
- Non-hydrologic responses – Sometimes non-hydrologic flow responses (not rainfall-runoff processes) are present in the flow data during rainfall events. These should be identified and removed from consideration in developing the model. Examples might include industrial discharge processes, river inflow, or other non I&I related flows. River inflow will often present itself as a “square wave” in the data, which occurs when a sewer is adjacent to a river and the river level rises above some directly connected inflow point, producing a rather flat flow pattern. Such a square wave cannot be represented with rainfall-runoff models, and so these events must be removed from the data.
- Snow or frozen ground events – The AMM is a rainfall-runoff model and does not include the model dynamics to simulate the snow melt process. It is possible to attempt to convert snow melt into an equivalent precipitation, although this is rarely done due to budget considerations and the effectiveness of just modeling around these events. In that case, events affected by snow melt should be identified and removed. Prior to the ground thawing, runoff from rain events can be highly affected by the impacts of the frozen ground. While these dynamics can sometimes be represented by the temperature component of the AMM, it is common to identify when the ground thawed in the spring, and use events after this time period for model development. The thaw date can be estimated by examining air temperature records, or sometimes it can be identified in observed data by a small mounding in the ground water infiltration in the spring that is not associated with a corresponding rainfall event and is instead caused by the ground thawing.
- Other items – Sometimes during model development it can be very difficult to calibrate the model to one particular storm event that was not flagged for any of the above items. It is possible that unknown issues affected an isolated storm event and could not be easily identified. The modeler should keep this in mind when developing a model, so as not to bias the model by overfitting it to any one storm event that may have had undetected issues. Deciding when one storm is an anomaly that should be removed from modeling, and when the response is real and presents an opportunity to learn about the system, is a balancing act and a judgement that comes from building many models. Modelers should be open-minded and consider both possibilities when developing models.
- Data scrubbing tool example (video).
- What is a scatter plot? (article).
- Scatter plot tool for reviewing flow metering data (video).
- NOAA’s overview of the NEXRAD radar tools.
4. Calibrate the Model
Calibration is the process of tuning model parameters to match observed data until the desired accuracy criteria are met. Below is a short overview of the suggested steps to calibrate an AMM:
- Base Flow Model – Calibrate the base flow model (ground water infiltration) first. The base flow response can react slowly to rainfall and continue for days or weeks after a rain – see example plot below. The base flow can still be increasing while the I&I components are receding. If the base flow model is not identified first, some of the base flow dynamics can be erroneously incorporated into the I&I models (usually the infiltration component).
- Summer Linear Model – Identify a summertime rain event under relatively dry conditions and use the linear rainfall-runoff model to identify the inflow and infiltration model components for that storm. Summer storms under dry conditions are usually dominated by the inflow component – see model component plots below. Calibrating the linear model to such a storm first can be useful for identifying the Shape Factor and Affine Constant for the inflow model.
- Spring Linear Model – Identify a spring rain event under relatively wet conditions and use the linear rainfall-runoff model to identify the inflow and infiltration model components for that storm. The infiltration component for spring storms is usually larger than for summer storms. Calibrating the linear model to such a storm next can be useful for identifying the Shape Factor and Affine Constant for the infiltration model. Comparing the Response Factors derived from the summer storm to those from the spring storm also provides insight into how important the AM effects are in the system and how much they are changing, and can provide an initial estimate of the Temperature Factors for the temperature model.
- Back-to-Back Storms – Identify a period with several back-to-back storms where AM conditions are increasing, and use them to calibrate the AM Retention Factor. Watch how the AM Retention Factor selected affects both the back-to-back storms and the stand-alone storms and identify a parameter that simultaneously matches both types of storms.
- Seasonal Impacts – Adjust the Temperature Factor to simultaneously match both wet spring storms and dry summer storms. The temperature factor transfer function has the effect of increasing the runoff response during cooler temperatures. Therefore, it is best to start this process with the linear model parameters identified from the summer storm above, and then scale up the Temperature Factor to match the flows in the cooler, wetter spring conditions. A good first guess for this scaling can be derived from the linear parameters for summer and spring storms described above. For temperate regions, such as Southern California, where seasonal effects are mild, this step can be skipped by setting the high and low temperature factors equal to each other.
- Iterate – Several iterations of the above steps are usually necessary to fine tune a model and understand how varying the parameters combine to affect the model output.
- Building a base flow model (video).
- Calibrating the linear model (video).
- The AM retention factor (video).
- The temperature factor (video).
- Overview of building an AMM (video).
5. Validate the Model
Validation is the process of testing the predictive accuracy of a model using system observations that were not used to develop and calibrate the model. A strong validation provides confidence that the model can perform its desired function: making predictions of unobserved conditions.
There are several methods that can be used to validate a model. A common method is to parse the system observations into two subsets. One subset is used for model calibration, without any consideration or use of the other data during the calibration. The second subset is used to validate the model without any further adjustments of the original model. A true predictive model will be capable of accurately simulating the validation data without any further adjustments to the model parameters.
The data can be split into these subsets by selecting a break point date in the system observations, or by identifying certain storms in the data set to use for validation. Alternating every other storm is one example of this. Selecting a large storm that approximates a design event is another example. Because the AMM is a continuous model, it is easier to select a break point date and perform the calibration and validation using continuous data sets. Ideally, the AMM would be calibrated to one or two years of observations, and then validated using another year. Showing that the model can accurately predict a full year of system observations bolsters confidence in the model. Later in this article, I will describe a methodology of validating a model that uses frequency performance.
Use of Validation Data for Re-Calibration
A question that often arises during validation is whether it is acceptable to adjust the model parameters after validation to reflect the information in the validation data set. Is that considered “cheating” in the validation? The answer depends on how it is done and disclosed. If the modeler is using the validation data set to select model calibration parameters for the express purpose of improving the validation performance, then this does in fact undermine the purpose and utility of the validation process. However, if new insights are learned during the validation process, such as identifying a new dynamic in the system that was not obvious during calibration, then it is perfectly valid to use this insight, along with the validation data, to develop a better model that reflects this dynamic. Ideally, this new model would then be re-validated with additional validation data. If this is not possible, because all of the data has been used, then full disclosure of the methods used to develop the model is appropriate, as is showing the fits to the validation data set both before and after the model was updated. Such insights and full disclosure can be useful later to modelers when the model is used for design and predictive applications, and when additional data becomes available to re-validate. Purposely representing the validation performance with a model that used the validation data to identify the parameters would not be ethical in my opinion, and would in fact be “cheating” in the validation.
Validating Event Models
Although not relevant to the process of validating an AMM, I am including a quick note on this topic because I see so much confusion around it. An event model is one in which the model parameters are adjusted for each individual storm event in the data set to match the observations for that event. This results in a table of parameters representing the model, with one row for each storm event. Final model parameters for design are then selected from this table by averaging the parameter values or by some other means.
For systems that do not exhibit AM effects, this may be a perfectly valid way to develop a model, as the parameters are not expected to change much from storm to storm. But for systems that exhibit AM effects, such a model is not predictive and therefore cannot be validated, because the parameters are changing in time as a result of changing AM conditions. The modeler must specify the parameters to use for each storm event modeled, using system observations to select the parameters, so this is not a predictive model. Event modeling is a reasonable method to build models for certain objectives, such as selecting certain model parameters and design rainfalls to understand system performance under those conditions. The lack of predictive capabilities for event models may not be important if the modeler is unconcerned with understanding the frequency of such conditions occurring.
However, some modelers may attempt to “validate” such a model by using a measure of the validation observations, such as the event volume, to guide their parameter selections for the validation events. It should be obvious that this process is not validation, because the validation observations are being directly used to adjust the model. To illustrate the fatuousness of this method, consider that the resulting volumes generated from the model during validation would match the observed volumes exactly, because the observed volumes were used to determine the parameter selection during validation. This would result in all of the validation model fits having a 0% volume error. Such a model performance should be an automatic red flag when reviewing validation results. At the end of this “validation” process the modeler will then use parameters for design simulation that they selected through some means outside the calibration / validation process, such as parameter averaging, highest parameters, or modeler judgement. This approach for validating a model is concerning for systems that exhibit AM effects, because it presents a false sense of accuracy in the event model that simply does not exist.
Event modeling is a valid approach for certain applications. However, for systems that exhibit AM effects, event models are limited because they are not predictive and cannot be validated, and they should not be represented as such.
6. Accuracy of Fits
Model performance for calibration and validation can be quantified using an accuracy of fit (AOF) methodology. There are many methods for quantifying model performance available. The method selected should be matched with the objective of the model, so that the model performance is being measured against the system metrics that will be most used. When using a continuous model like AMM for system design, peak flows and volumes are important outputs that will be used in design. Therefore, this section outlines an AOF methodology for quantifying those metrics.
Because AMM is a continuous model, accuracy of fit should be quantified over many different storm events under varying AM conditions. Below is an outline of a suggested process for computing AOF metrics:
- Identify key storms – Identify the key storms in the data set for computing AOF metrics. Sometimes these are the largest storms, but they should also include a good variety of storm sizes and AM conditions, so that the AM metrics reflect a good variation for calibrating and validating the model.
- Compute storm AOF metrics – For each storm, tabulate the observed and modeled values for the metrics of interest such as peak flow and volume. Compute the model error for each storm by computing the percent difference between model and observed with the following formula: Error = (Model – Observed) / Observed. This will result in a convention of the percent error being computed as a deviation from the observed measurement, with positive errors indicating the model over-predicts and negative errors indicating the model under-predicts. See examples of these computations in the tables below.
- Compute net model error – Net model error is the average of the model errors across all selected storms for a given parameter (such as volume or peak flow). No model is perfect, and some storms will be over-predicted and some will be under-predicted with a continuous model. Computing net model error allows those positive and negative values to cancel each other out, and thus indicates whether there is model bias for over- or under-prediction. See examples of these computations in the tables below.
- Compute total model error – Total model error is the average of the absolute value of the model errors across all selected storms for a given parameter (such as volume or peak flow). Computing total model error eliminates the cancellation of positive and negative model errors, and thus indicates the average predictive accuracy of the model for simulating any given storm. Unless the model is perfect, it will not be possible to achieve a zero total model error through calibration, and thus the total model error is the key metric to assess the underlying model accuracy. (As opposed to net model error, in which a skilled model calibrator will often be able to achieve a zero error). See examples of these computations in the tables below.
- Evaluate model performance – It should be possible, by adjusting model parameters, to achieve a near-zero net model error during calibration; errors in the range of 1-3% are acceptable. Sometimes the balancing of errors between peaks and volumes from the inflow and infiltration components can create challenges in achieving a zero net error. In that case, the modeler should lean towards a positive net model error (over-prediction) to be conservative. Acceptable values for total model error depend on the severity of the AM effects present in the system being simulated. For example, many systems exhibit a 10x (or 1,000%) variation or more in capture percentage (i.e. the percentage of rainfall captured by the system). In such a system, a total model error of less than 20% is good model performance, and less than 10% would be excellent. For lesser or greater variations due to AM effects, the criteria for a good total model error can be adjusted accordingly.
- Re-calibrate the model if needed – If the model AOF performance metrics fall outside of the criteria described above (near zero net error and total error of less than 10-20% for typical separate sanitary systems), it is probably necessary to continue to calibrate the model until these performance measures are met.
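The AOF computations above can be sketched in a few lines of code. This is a minimal illustration of the error conventions described in this section; the function names are hypothetical.

```python
def storm_errors(modeled, observed):
    """Percent error per storm: (Model - Observed) / Observed.

    Positive = model over-predicts, negative = model under-predicts.
    """
    return [(m - o) / o for m, o in zip(modeled, observed)]

def net_model_error(errors):
    """Average of the signed errors; indicates over/under-prediction bias."""
    return sum(errors) / len(errors)

def total_model_error(errors):
    """Average of the absolute errors; indicates per-storm predictive accuracy."""
    return sum(abs(e) for e in errors) / len(errors)
```

For example, two storms modeled at 110 and 90 against observations of 100 give signed errors of +10% and -10%, a net error of 0% (no bias), and a total error of 10%.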
In the above AOF example for peak flow (top table), the model is biased to over-predict slightly with a net error of +4.1%. This suggests that the model should be recalibrated to reduce the net peak error to near zero, which would also improve the total peak error. The AOF example for volume (lower table) is excellent with a zero net error, indicating a very low model bias, and an excellent total error (7.1%), indicating a very high predictive accuracy. Also note the tremendous variation in observed volumes, from a high value of 728 kcf to a low value of 37 kcf, indicating a volume variation of 19.7x or 1,970%. An average predictive accuracy of 7.1% for total model volume error for a system with this much volume variation is an excellent model performance.
However, it is likely that re-calibrating the model to decrease the net peak error slightly (in the top table) would also cause a reduction in the model volume, which would then cause the volume net error (in the bottom table) to become negative and biased to under-prediction. This is a common complexity in simultaneously calibrating to peak and volume errors. In this case, the solution involves adjusting the inflow and infiltration models in opposite directions to achieve a good peak and volume calibration simultaneously with very little model bias (low net errors).
Peak flows are often dominated by the inflow component, and volumes are often dominated by the infiltration component. By simultaneously adjusting the inflow model upwards, and the infiltration model downward, it should be possible to achieve a near zero net error simultaneously for both peak and volume. Doing so will also improve the total errors, minimizing them to the greatest extent possible. This iterative process for calibrating a model using AOF metrics helps converge the modeling process to identify the best model possible during calibration.
7. Long Term Continuous Simulation
Challenges with the Design Storm
A common approach to using a hydrologic model for design is to select a design storm (such as a 25-year, 24-hour storm) and simulate that design storm with the model. For systems with large AM effects, that method has several challenges. These include selecting what wetness conditions to use for the design storm, understanding the frequency of such design conditions occurring, and assessing the risk of the resulting design flows being exceeded. Resolving these issues typically requires a great deal of human judgement, which is unsatisfying in a scientific numerical exercise like modeling.
Long-Term Simulation with Frequency Analysis
The solution to this dilemma is to move away from the common design-storm approach, and instead employ a frequency analysis using a long-term continuous simulation. This process eliminates the need to select design wetness conditions, provides a solid understanding of the frequency of various design flows, and allows for assessment of the risks of greater flows occurring than that selected for design. It also has the effect of removing the human judgements around design decisions from the modeler, and instead places the decisions of balancing risk and costs in the hands of the project stakeholders and system owners.
The first step of this process is to run a long-term continuous simulation using the calibrated and validated AMM. This is done by obtaining a long-term climatological data set of precipitation and air temperature for use in the modeling. Data sets spanning more than 50 years are usually readily available from a nearby National Weather Service (NWS) station. Such a long-term data set contains a wide variety of hydrologic conditions, including large design-type storms, smaller storms under very wet conditions, back-to-back storms, drought conditions, prolonged wet periods, and other interesting phenomena that are likely to occur over a long period of time.
This long-term climatological data set is run through the AMM to develop a long-term flow prediction that can then be mined for frequency performance information (described in the next step). By routing this data set through a model like the AMM that accounts for AM effects, the variations in precipitation and antecedent moisture conditions that occur over time are reflected in the output. This eliminates the problems described above with the design-storm approach.
A Few Notes on This Methodology
- Station Selection – The long-term data set need not be obtained from a weather station that is within the watershed being modeled. It only has to be located within a similar climatological region. Usually a weather station at the closest major airport will suffice. A review of the climatic sections contained in the Rainfall Frequency Atlas of the Midwest by Huff and Angel (page 4) provides a good indication of the geographic size of a similar climatological region in the Midwest. NOAA Atlas 14 could be reviewed for other locations around the U.S. to assess regional rainfall variations when selecting a station.
- What is Being Simulated – The methodology is not literally simulating the flows that occurred historically in the system. The system probably did not even exist in its present form as far back as the long-term data set goes. Therefore, the output from the long-term simulation is not expected to match the specific performance of historic events; that is what calibration and validation are for. The station selected only has to be within a similar climatological region, so that it reflects the frequency content of rainfall, temperature, and the resulting AM effects that occurred over a long period of time. That is why it is not important for the long-term data to be obtained from a station very close to the watershed.
- Climate Change – The impact of climate change can be assessed with this method by making appropriate modifications to the long-term data to reflect the anticipated impacts of climate change. For example, the temperature or rainfall values can be adjusted upwards or downwards in the long-term data set as desired before routing the data through the AMM. This will generate an AMM prediction of the flows that are likely to occur with the climate change adjustments made. EPA’s National Stormwater Calculator, for instance, contains tools that can estimate the impacts of climate change on rainfall on a month-by-month basis. These monthly changes can be applied to the long-term data set to assess their impacts on the design flows from the system.
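As a small illustration of adjusting the long-term data set, the sketch below applies hypothetical month-by-month rainfall multipliers (such as ones derived from EPA’s National Stormwater Calculator) to a precipitation series before it is routed through the model. The function name and data layout are assumptions for illustration.

```python
def apply_monthly_factors(records, factors):
    """Scale each (timestamp, rainfall) record by a month-specific factor.

    factors: dict mapping month number (1-12) to a multiplier,
    e.g. {6: 1.10} to increase June rainfall by 10%.
    Months not listed are left unchanged.
    """
    return [(t, r * factors.get(t.month, 1.0)) for t, r in records]
```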
8. Frequency Analysis
Once the long-term continuous simulation is complete, the output from the model can then be mined for frequency performance information. The Log Pearson III (LP3) probability distribution can be used for this purpose. The LP3 function is well suited for fitting the frequency performance from hydrologic systems and is the same process that is used by FEMA and other federal agencies to establish flood plains from long-term data sets.
In this methodology an annual maxima series is developed from the long-term simulation. This is done by making a table of the highest flow from each year of the simulation. These flows are then arranged from highest to lowest. An exceedance probability is then assigned to each flow using a plotting position formula, such as the Weibull plotting position, p = x / (n+1), where n is the number of years in the annual maxima table and x is the rank of that flow (1 = highest). A plot can then be made of flow versus exceedance probability, and this plot can be used to assess the exceedance probability, or the percent chance of any flow occurring or being exceeded, in any given year.
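The annual maxima and plotting-position steps can be sketched as follows, assuming the long-term simulation output is available as (timestamp, flow) pairs; the function names are illustrative.

```python
from collections import defaultdict

def annual_maxima(records):
    """Highest flow in each calendar year of a (timestamp, flow) series."""
    maxima = defaultdict(lambda: float("-inf"))
    for t, q in records:
        maxima[t.year] = max(maxima[t.year], q)
    return sorted(maxima.values(), reverse=True)  # arranged highest to lowest

def weibull_positions(maxima):
    """Assign exceedance probability p = x / (n + 1) to each ranked flow.

    x is the rank (1 = highest flow), n the number of years;
    the average recurrence interval for a flow is 1 / p.
    """
    n = len(maxima)
    return [(q, rank / (n + 1)) for rank, q in enumerate(maxima, start=1)]
```

With a 49-year record, for instance, the largest flow plots at p = 1/50 = 0.02, a 50-year recurrence interval.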
This is done by fitting the LP3 distribution to the data points in the plot. The LP3 distribution can then be used to interpolate between data points or extrapolate beyond the end of the points in the plot. For example, if a 50-year continuous simulation was used and the desire is to determine the 10-year recurrence interval flow, the LP3 plot can be used to interpolate between points and estimate the 10-year flow. If instead it is desired to estimate the 100-year flow from that same 50-year simulation, then the LP3 distribution must be used to extrapolate to the 100-year flow. A plot illustrating the process is shown below, and a sample spreadsheet for making the computations is provided in a link here and at the end of this section. When modeling a system where a diurnal pattern was filtered from the total flow, it is necessary to add the average value of this filtered flow back into the output of the frequency analysis before using it for design.
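As a rough sketch of the LP3 fit, the following applies a textbook method-of-moments fit to the log10 annual maxima and uses the Wilson-Hilferty approximation for the Pearson III frequency factor. This is a simplification for illustration only; Bulletin 17C procedures add refinements (regional skew weighting, low-outlier screening) not shown here, and the companion spreadsheet should be preferred for real analyses.

```python
import math
from statistics import NormalDist, mean, stdev

def lp3_quantile(flows, return_period):
    """Estimate the flow for a given recurrence interval from a basic
    Log Pearson III fit (method of moments on the log10 annual maxima)."""
    logs = [math.log10(q) for q in flows]
    n = len(logs)
    m, s = mean(logs), stdev(logs)
    # Unbiased sample skew of the log10 flows
    cs = (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in logs)
    # Standard normal quantile for the non-exceedance probability 1 - 1/T
    z = NormalDist().inv_cdf(1.0 - 1.0 / return_period)
    if abs(cs) < 1e-9:
        k = z  # zero skew reduces to the log-normal case
    else:
        # Wilson-Hilferty approximation for the Pearson III frequency factor
        k = (2.0 / cs) * ((1.0 + cs * z / 6.0 - cs ** 2 / 36.0) ** 3 - 1.0)
    return 10.0 ** (m + k * s)
```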
The reciprocal of the exceedance probability represents the average return period of the event. For example, an exceedance probability of 0.04 represents a 25-year recurrence interval. That is to say, over a long period of time, that peak flow would occur an average of 25 years apart. Sometimes the interval between occurrences would be more, and sometimes less, but the average recurrence interval is 25 years.
With this approach, the modeler can now share with decision makers the relationship between risks and costs. The x-axis of flow (or another metric, such as volume) is the primary driver of the cost of improvements. The y-axis, or probability, can be used to understand the risk of design decisions. This gives decision makers a strong basis for selecting the combination of costs and risks they are most comfortable with. This contrasts with traditional techniques, in which the modeler alone incorporates these risk decisions into the model calibration by selecting specific wetness conditions for the design event.
This process can also be used to simulate the frequency of overflows, or the volumes required for sizing a storage facility. In those cases, the x-axis is simply replaced with those metrics mined from the long-term continuous model. For example, when sizing a storage facility, the highest storage volume required in each year is determined by computing the volume above a certain capacity threshold for each large storm event. The resulting annual maxima storage volume series is then used in the frequency analysis methodology outlined above.
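A minimal sketch of mining the annual maxima storage-volume series from a continuous flow series is shown below. The capacity threshold, hourly timestep, and toy two-year series are assumptions for illustration, not values from the article.

```python
# Sketch: for each year, find the largest volume of flow above a
# capacity threshold accumulated over a single continuous exceedance
# event, yielding the annual maxima storage-volume series.
import pandas as pd

def annual_max_storage(flows: pd.Series, capacity: float,
                       dt_hours: float = 1.0) -> pd.Series:
    """flows: timestamped flow series; capacity: threshold flow rate."""
    # Volume stored in each timestep is the excess above capacity
    excess = (flows - capacity).clip(lower=0.0) * dt_hours
    # Label each continuous run above capacity as one event
    above = flows > capacity
    event_id = (above != above.shift(fill_value=False)).cumsum()
    event_vol = excess.groupby([flows.index.year, event_id]).sum()
    # Largest single-event volume in each year
    return event_vol.groupby(level=0).max()

# Toy hourly series spanning two years (values are illustrative only)
idx = pd.to_datetime(["2020-01-01 00:00", "2020-01-01 01:00",
                      "2020-01-01 02:00", "2020-01-01 03:00",
                      "2020-01-01 04:00", "2020-01-01 05:00",
                      "2021-01-01 00:00", "2021-01-01 01:00"])
flows = pd.Series([1, 5, 6, 1, 9, 1, 10, 1], index=idx, dtype=float)
print(annual_max_storage(flows, capacity=4.0))
```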
- Example Log Pearson III Frequency Analysis Spreadsheet (spreadsheet)
- Frequency Analysis Process and Tools Overview (video)
- USGS Guidelines for Determining Flood Flow Frequency (Bulletin 17C)
9. Frequency Validation
Once the frequency analysis is performed, it may be possible to perform a higher-level validation of the model performance. This can be done by using long-term performance observations from the system such as sanitary sewer overflow (SSO) frequency, long-term observed peak flows from the system, volumes of filling events at storage facilities, or other long-term records that can be compared against the modeled frequency analysis. This will further validate the performance of the model.
An example of this process is shown in the plot below. A long-term continuous simulation and frequency analysis was performed using the model with a 50-year climatological data set (black points). This system also had an eleven-year record of observed flows, from which an observed peak flow frequency analysis was performed (red points). The strong overlap between the model and observed frequency plots provided a secondary, higher-level frequency validation of the model. Such a validation can provide additional confidence in the model results, especially for the larger, infrequent events, like the 10-year recurrence interval flow.
10. Physical Interpretation of the Model
Once an AMM is developed, physical interpretation of the model structure and parameters is possible because the AMM is a grey-box model with model components that relate to physical processes in the system. These physical insights can be useful for design, making rehabilitation decisions for a system, and cross-comparing systems.
For example, it is often necessary to model the flow components of base sanitary flow, groundwater infiltration, and inflow and infiltration (I&I) separately, and to add them together to derive the total system flow, as shown in the model block diagram above. The relative magnitudes of the components give engineers insight into the contributions of each flow component to system peak flows and volumes. This information can be used to match system improvements and source removal techniques to the specific flow components present, and to estimate the impact of these improvements on the system hydrograph. For example, a system with a very high inflow component may be a target for finding and removing directly connected impervious areas, whereas a system with a very high groundwater infiltration component may be a target for repairing pipeline defects. The impacts of these improvements can be estimated by adjusting the corresponding model component and updating the long-term simulation and frequency analysis.
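The component structure described above can be sketched as simple addition of component series; the component names, values, and the 50% inflow reduction below are illustrative assumptions, not AMM outputs.

```python
# Sketch: assemble total flow from separately modeled components, then
# scale one component to estimate the effect of a source-removal project.
import numpy as np

base_sanitary = np.array([1.0, 1.0, 1.0, 1.0])        # diurnal base flow
gwi = np.array([0.5, 0.5, 0.5, 0.5])                  # groundwater infiltration
inflow_infiltration = np.array([0.0, 2.0, 4.0, 1.0])  # rainfall-driven I&I

# Total system flow is the sum of the components
total = base_sanitary + gwi + inflow_infiltration

# e.g., estimating the hydrograph after removing half of the directly
# connected inflow sources (the 50% figure is a hypothetical scenario)
improved_total = base_sanitary + gwi + 0.5 * inflow_infiltration
```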
The linear portion of the model can be represented as a scaled unit hydrograph, and the AM Retention function of the model computes how the unit hydrograph changes over time in response to changing AM conditions. This means that the model simulates a continuously varying capture percentage (i.e., the percentage of rainfall captured by the system), represented as a continuously varying Response Factor. Understanding the magnitude and continuous changes of the capture percentage provides insight into the severity of the defects in the system and allows for cross-comparison with other systems.
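The scaled-unit-hydrograph idea can be illustrated with a toy convolution. In the full AMM the Response Factor is computed continuously by the AM Retention function; here it is simply a fixed per-step array, which is an assumption made for this sketch, as are the unit hydrograph ordinates and rainfall values.

```python
# Sketch: the linear portion as a scaled unit hydrograph. Runoff is
# rainfall convolved with a unit hydrograph, after scaling each rainfall
# step by a (possibly time-varying) response factor, i.e. the capture
# percentage.
import numpy as np

def scaled_uh_runoff(rain, unit_hydrograph, response_factor):
    """Runoff from rainfall via a response-factor-scaled unit hydrograph."""
    rain = np.asarray(rain, dtype=float)
    rf = np.broadcast_to(np.asarray(response_factor, dtype=float), rain.shape)
    effective_rain = rf * rain  # captured portion of each rainfall step
    # Convolve and truncate to the length of the rainfall series
    return np.convolve(effective_rain, unit_hydrograph)[: len(rain)]

uh = np.array([0.5, 0.3, 0.2])             # unit hydrograph ordinates (sum to 1)
rain = np.array([0.0, 1.0, 0.0, 2.0, 0.0])
rf = np.array([0.1, 0.1, 0.2, 0.3, 0.3])   # wetter system -> higher capture
q = scaled_uh_runoff(rain, uh, rf)
```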
The AMM allows for cross-comparison of systems by simulating different systems under the same AM conditions and comparing their responses. This enables system benchmarking that may not be possible with system observations alone, if those observations did not occur under the same AM conditions. For example, performance comparisons can be made using flow data collected from “System A” during 2015 and “System B” during 2020, even though these observations occurred under different AM conditions. This is done by calibrating an AMM to each system and then running both models under the same AM conditions, so the systems can be compared under identical simulated conditions.
The AMM is an advanced, powerful, and accurate tool for simulating the rainfall-runoff process in systems that exhibit AM effects. The AMM is a non-linear, lumped, conceptual, deterministic, grey-box model developed from the principles of system identification and parsimony. The model contains relatively few parameters, which has several advantages: ease of use, fewer parameters to calibrate, the ability to quickly identify optimal parameters, ease of representation in a numerical computer routine, and the ability to make accurate predictions of unobserved or design conditions, which is a critical function of a model. Physical interpretation of the model structure and parameters is possible, providing the modeler with useful insights into the physical processes driving the rainfall-runoff dynamics. The linear portion of the model, which has the same form as an instantaneous unit hydrograph, is useful for simulating rainfall-runoff from any system for a single event, or for systems that do not experience significant AM effects.
The H2Ometrics website contains a vast library of tools to aid in the development of an AMM. These resources include numerous papers, articles, videos and spreadsheets to aid in learning about and developing an AMM. The H2Ometrics platform contains many tools that were specifically developed to aid in the development of AMMs such as data scrubbing tools, diurnal filter tools, I&I computations, I&I metrics, base flow models, AMM model development tools, accuracy of fit metrics, and frequency analysis tools. Contact us for more information.
Copyright © 2020 by H2Ometrics