Error inserting new column in ArcSDE table (ERROR 999999)

I am using a Python script to populate a table with values I get from raster files. I use arcpy.AddField_management to add a new column and arcpy.CalculateField_management to insert the values for the new column. The table has more than 450 columns, and after executing arcpy.CalculateField_management I get the following error:

ExecuteError: ERROR 999999: Error executing function. Underlying DBMS error [[Microsoft][SQL Server Native Client 11.0][SQL Server]Cannot create a row of size 8074 which is greater than the allowable maximum row size of 8060.] Failed to execute (CalculateField).

I can avoid this error by executing the following in SQL Server:

ALTER TABLE REBUILD

Is there any way to execute this SQL Server statement from the script, or is there another equivalent way to avoid the error?
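
One hedged possibility, assuming the script can reach the geodatabase through an existing .sde connection file, is arcpy's ArcSDESQLExecute class, which passes SQL straight through to the underlying DBMS. A minimal sketch (the connection-file path and table name below are placeholders):

    import arcpy

    # Open a pass-through SQL connection to the enterprise geodatabase
    # (hypothetical connection file).
    conn = arcpy.ArcSDESQLExecute(r"C:\connections\gisdb.sde")

    # Rebuild the table so SQL Server compacts the row layout left
    # behind by repeated AddField/CalculateField operations.
    conn.execute("ALTER TABLE my_schema.my_table REBUILD")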


How can I figure out the root cause of “Out of range value for column” error in MySQL

I have a console app written in C# on top of the .NET Core 2.2 framework with EntityFrameworkCore as the ORM. My application runs various tasks every 15 minutes. Each task calls an external API server and updates my internal MySQL database (MySQL v8.16). If a job fails for any reason, I get an email with the error.

I got the error email with the following message:

Error Message: Out of range value for column 'Value' at row 1

I am trying to figure out which Value column is causing this error and which query caused the issue.

From the look of it, MySQL does not log the invalid queries anywhere, which sends me on a wild search for the problem column.

I looked up all columns named `Value` in my database using the following query:

The above query gave me the following results

I think if the error were caused by a column of type varchar/longtext, I would get a "Text truncated" error, not "Out of range value". With that in mind, I am left with the following columns as suspects:

Furthermore, since my application is written in C# and EntityFrameworkCore is used as my ORM to interact with the database, I feel confident that neither boolean_fields.Value nor time_fields.Value is causing the issue. My reasoning is that the property type is DateTime? in the TimeField entity model, whereas the property type is bool in my BooleanField entity model.

With the above assumption in mind, I am down to one field that could be causing my problem.

I store all numeric values into the numeric_fields.Value column (i.e., int, decimal, and long).

The API is expected to return numeric data using the following

I should be able to store all of the above numeric values in a decimal(30,6) column with no problem.
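
As a quick sanity check on that claim: a decimal(30,6) column keeps 24 digits left of the decimal point, which comfortably covers long.MaxValue. A short sketch of the arithmetic (the sample values are hypothetical):

    from decimal import Decimal

    # decimal(30,6): 30 significant digits total, 6 after the point,
    # so the largest storable magnitude is 10**24 - 10**-6.
    max_storable = Decimal(10) ** 24 - Decimal("0.000001")

    samples = {
        "long.MaxValue": Decimal(2**63 - 1),        # ~9.2e18 -> fits
        "hypothetical API value": Decimal("1e25"),  # 25 integer digits -> out of range
    }

    for name, value in samples.items():
        print(name, "fits" if abs(value) <= max_storable else "out of range")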


Types of feature classes

Vector features (geographic objects with vector geometry) are versatile and frequently used geographic data types, well suited for representing features with discrete boundaries, such as streets, states, and parcels. A feature is an object that stores its geographic representation, which is typically a point, line, or polygon, as one of its properties (or fields) in the row. In ArcGIS, feature classes are homogeneous collections of features with a common spatial representation and set of attributes stored in a database table, for example, a line feature class for representing road centerlines.

When creating a feature class, you'll be asked to set the type of features to define the type of feature class (point, line, polygon, and so forth).
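
For example, a minimal arcpy sketch that sets the geometry type at creation time (the geodatabase path and names here are hypothetical):

    import arcpy

    # Create a line feature class for road centerlines; the geometry_type
    # argument is what fixes this as a line feature class.
    arcpy.CreateFeatureclass_management(
        out_path=r"C:\data\transport.gdb",
        out_name="road_centerlines",
        geometry_type="POLYLINE",
        spatial_reference=arcpy.SpatialReference(4326),  # WGS 1984
    )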

Generally, feature classes are thematic collections of points, lines, or polygons, but there are seven feature class types. The first three are supported in databases and geodatabases. The last four are only supported in geodatabases.

  • Points: Features that are too small to represent as lines or polygons as well as point locations (such as GPS observations).
  • Lines: Represent the shape and location of geographic objects, such as street centerlines and streams, too narrow to depict as areas. Lines are also used to represent features that have length but no area, such as contour lines and boundaries.
  • Polygons: A set of many-sided area features that represents the shape and location of homogeneous feature types such as states, counties, parcels, soil types, and land-use zones.
  • Annotation: Map text including properties for how the text is rendered. For example, in addition to the text string of each annotation, other properties are included such as the shape points for placing the text, its font and point size, and other display properties. Annotation can also be feature linked and can contain subclasses.
  • Dimensions: A special kind of annotation that shows specific lengths or distances, for example, to indicate the length of a side of a building or land parcel boundary or the distance between two features. Dimensions are heavily used in design, engineering, and facilities applications for GIS.
  • Multipoints: Features that are composed of more than one point. Multipoints are often used to manage arrays of very large point collections, such as lidar point clusters, which can contain literally billions of points. Using a single row for such point geometry is not feasible. Clustering these into multipoint rows enables the geodatabase to handle massive point sets.
  • Multipatches: A 3D geometry used to represent the outer surface, or shell, of features that occupy a discrete area or volume in three-dimensional space. Multipatches comprise planar 3D rings and triangles that are used in combination to model a three-dimensional shell. Multipatches can be used to represent anything from simple objects, such as spheres and cubes, to complex objects, such as iso-surfaces and buildings.


2 Answers

I would suggest using JavaScript Remoting for this, which gives you the benefit of a more lightweight solution and conforms your <apex:pageBlockTable> to Salesforce's list views. For example:

Visualforce

You can pass parameters inside apex:commandButton. So, as you emit the rows in your table, you can have a child element for your apex:commandButton, an apex:param, which will call a setter on your controller prior to processing the action of your apex:commandButton.

For example, you could use the id value of your iterator, que, as the identifier instead of trying to do the rowCount thing. Modify removeDesiredRow() to expect que.id in a rowToRemove variable instead of the rowCount.

Simply add some rerender behavior to the apex:commandButton to keep your data table up to date. You can also get fancy and use <apex:actionSupport> and <apex:actionRegion> if you want an AJAX solution without writing JavaScript manually.


MySQL Connector/Python 8.0.24 has been released

MySQL Connector/Python 8.0.24 is the latest GA release version of the
MySQL Connector/Python 8.0 series. The X DevAPI enables application
developers to write code that combines the strengths of the relational
and document models using a modern, NoSQL-like syntax that does not
assume previous experience writing traditional SQL.
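
As a minimal sketch of that syntax, here is what working with the X
DevAPI through Connector/Python's mysqlx module can look like (the
connection URI, schema, and collection names below are placeholders):

    import mysqlx

    # One session serves both the document and the relational model.
    session = mysqlx.get_session("mysqlx://user:password@localhost:33060")
    schema = session.get_schema("test")

    # Document side: store and query JSON documents without writing SQL.
    books = schema.create_collection("books", reuse_existing=True)
    books.add({"title": "Sample", "pages": 100}).execute()
    doc = books.find("pages > :p").bind("p", 50).execute().fetch_one()
    print(doc["title"])

    session.close()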

To learn more about how to write applications using the X DevAPI, see

For more information about how the X DevAPI is implemented in MySQL
Connector/Python, and its usage, see

Please note that the X DevAPI requires MySQL Server 8.0 or higher with
the X Plugin enabled. For general documentation about how to get
started using MySQL as a document store, see

To download MySQL Connector/Python 8.0.24, see the “General Availability
(GA) Releases” tab at

Changes in MySQL Connector/Python 8.0.24 (2021-04-20, General Availability)

Functionality Added or Changed

  • Removed Python 2.7 and 3.5 support, and added Python 3.9
    support. (Bug #32144255, Bug #32192619, Bug #32001787)
  • Improved server disconnection handling of X Protocol
    connections now creates a log entry and returns an error
    message, as needed, after Connector/Python receives a
    connection-close notice from the server. Connector/Python
    detects three new types of warning notices.
    Connection idle notice. This notice applies to a server
    connection that remains idle for longer than the relevant
    timeout setting. Connector/Python closes the connection
    when it receives the notice in an active session or while
    a new session is being created. An attempt to use the
    invalid session returns the “Connection closed. Reason:
    connection idle too long” error message.
    Server shutdown notice. If a connection-close notice is
    received in a session as a result of a server shutdown,
    Connector/Python terminates the session with the
    “Connection closed. Reason: server shutdown” error
    message. All other sessions that are connected to the
    same endpoint are removed from the pool, if connection
    pooling is used.
    Connection killed notice. If the connection is killed
    from another client session, Connector/Python closes the
    connection when it receives the notice in an active
    session or while a new session is being created. An
    attempt to use the invalid session returns the
    “Connection closed. Reason: connection killed by a
    different session” error message.
  • If a classic MySQL protocol connection experiences a
    server timeout, Connector/Python now reports more precise
    disconnection information from the server.
  • For the C-extension, executing prepared statements
    emitted errors when placeholders were defined without
    associated parameters. Now such statements are not
    executed. (Bug #32497631)
  • For prepared statements, any type of argument was
    accepted, which could produce undesired results. Now the
    use of list or tuple objects for the arguments is
    enforced, and passing in other types raises an error. (Bug #32496788)
  • Added Django 3.2 support while preserving compatibility
    with Django 2.2, 3.0, and 3.1. (Bug #32435181)
  • Added context manager support for pooled connections, a
    feature added to standard connections in 8.0.21; a short
    sketch follows this list. (Bug #32029891)
  • Replaced the deprecated PyUnicode_GetSize with
    PyUnicode_GET_LENGTH to fix the casting of Python’s
    unicode to std::string. (Bug #31490101, Bug #99866)
  • Binary columns were returned as strings instead of
    ‘bytes’ or ‘bytearray’. (Bug #30416704, Bug #97177)
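
A sketch of the new pooled-connection context manager (the credentials
and pool settings below are placeholders; entering the with-block draws
a connection from the named pool, and leaving it returns the connection
to the pool automatically):

    import mysql.connector

    # Requesting a connection with pool_name set creates/uses a pool;
    # as of 8.0.24 the pooled connection can be used as a context manager.
    with mysql.connector.connect(
        pool_name="mypool",
        pool_size=3,
        host="localhost",
        user="user",
        password="password",
        database="test",
    ) as cnx:
        cursor = cnx.cursor()
        cursor.execute("SELECT VERSION()")
        print(cursor.fetchone())
        cursor.close()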


Enjoy and thanks for the support!

On Behalf of the MySQL Engineering Team,
Balasubramanian Kandasamy


GIS Data Administration

A variety of data management and deployment architecture strategies are available today to improve data access and dissemination throughout the rapidly expanding GIS user community. The volume of data you must sort through each day is growing exponentially. How you manage, organize, and control these data resources is critical to system performance and scalability.

This section will first show how to modify the CPT Platform Capacity Calculator workflow configuration to demonstrate the performance impact of your data source format selections.

Finally, this section will demonstrate how to configure the CPT for an Imagery workflow.

Modifying the CPT Platform Capacity Calculator workflow configuration

The CPT Platform Capacity Calculator is a simple tool for evaluating selected platform capacity. The default tool, located at the bottom of the CPT Hardware tab, includes a variety of standard workflows that demonstrate platform capacity. For analysis and reporting purposes, you may want to change the default list of sample workflows and include those workflows you are evaluating in your own design environment. This section describes how you can change the Platform Capacity Calculator workflow samples to a custom set of workflows for demonstration purposes.

ArcGIS for Server mapping service data source performance comparison

The CPT provides six data source format selections for vector workflows. These selections include SDE_DBMS, Small File GDB, Large File GDB, Small ShapeFile, Medium Shapefile, and Large Shapefile. The CPT Platform Capacity Calculator default configuration includes a total of 5 workflows.

Figure A-5.2 shows the location of the selected workflows displayed on the Platform Capacity Calculator. These workflows are located in column A directly behind the Platform Capacity chart. To access these workflows, use your mouse to select and drag the Platform Capacity chart to a location below the workflow selection list. The workflow selection list is a group of white cells in column A below the platform selection cell, normally located behind the Platform Capacity chart.

Adjusting the workflow display on the CPT Platform Calculator:

  • Select and slide the Platform Capacity chart below the workflow selection list.
  • Select the desired workflow list.
    • Dropdown list shows available workflow selections from the CPT Workflow tab.
    • Medium complexity workflow must be selected for a proper analysis and display.
    • Workflow name displayed on the Platform Capacity graph is shown in column B.

Figure A-5.3 shows where you can select the data source format for each workflow. The data source for each workflow can be selected in column I (same row as the selected workflow). For this example, the AGS102 REST MSD R 100%Dyn Med 10x7 JPEG workflow was selected to demonstrate the variation in performance between the available GIS vector data source selections. There are six different vector data source formats included in the CPT, so a total of six workflow rows were included in the demonstration. A separate data source was selected for each workflow in column I.

Once the workflows and data source selections are made, you can move the Platform Capacity chart back over the workflow selection list for the final analysis and display. When you select a platform configuration in column A, the Platform Capacity chart will show a peak platform throughput range for each of the selected workflows. The Platform Capacity chart shows 80 percent throughput estimates for both medium and light complexity workflow configurations.

Figure A-5.4 shows the modified custom Platform Capacity Calculator results. The selected platform configuration is the Xeon E5-1280v2 4 core (1 chip) 3600 MHz server. AGS102 REST MSD R 100%Dyn Med 10x7 JPEG peak throughput varies from 22,200 TPH to 166,200 TPH depending on the display complexity and the selected data source.

  • Workflow recipe and data source are displayed for each result on the Y-axis.
  • Medium and light platform capacity is shown for each workflow based on estimated 80 percent peak throughput.

ArcGIS for Server Imagery service data source performance comparison

The CPT provides seven file-based data source format selections for imagery workflows. These selections include TIFF uncompressed, TIFFLZW compression, TIFFJPG compression, JPG2000, MRSID, ECW, and IMG ERDAS. The CPT Platform Capacity Calculator workflow configuration can be adjusted to compare selected platform throughput capacity for the seven imagery workflows.

Figure A-5.5 provides a view of the CPT Platform Capacity Calculator configured to show performance of the seven (7) available imagery data source formats. The AGS102 Imagery MosaicDS R 100%Dyn recipe is used to represent the imagery workflow. The platform selection is the Xeon E5-1280v2 4 core (1 chip) 3600 MHz server configuration. The platform capacity output ranges from 15,800 TPH to 180,200 TPH based on the selected data source and medium to light workflow complexity.

Procedure for adjusting the custom workflow display on the CPT Platform Capacity Calculator:

  • Select and slide the Platform Capacity chart below the workflow selection list.
  • Select the desired workflow list (AGS102 Imagery MosaicDS R 100%Dyn Med 10x7 JPEG).
  • Expand the workflow list by using the copy row and insert copied cells commands.
  • Complete your desired workflow selection in column A (seven rows with the same imagery workflow).
  • Select the data source for each workflow in column I.
  • Replace the Platform Capacity chart over the workflow selection list.
  • The workflow recipe and data source are displayed for each result on the Y-axis.
  • Medium and light platform capacity is shown for each workflow.

Selecting an imagery workflow on the CPT Calculator tab

Figure A-5.6 shows how to select an Imagery software pattern on the CPT Calculator tab.

  • Imagery, Density, Platform Architecture, and Data Source have unique selection lists and will show red if the selection is invalid.
  • Update the software technology performance factors to complete the imagery workflow definition.

Figure A-5.7 shows the Imagery dataset manager selection. Select the imagery dataset manager (MosaicDS or RasterDS) that will be used for workflow imagery data access.

Figure A-5.8 shows the Density selection. For all imagery workflows, the Density selection must be Raster.

Figure A-5.9 shows where to identify the mosaic dataset location. Select the location of the mosaic dataset (DBMS or FGDB).

Figure A-5.10 shows the imagery data source format selection. The list of available imagery data formats is provided as a dropdown menu when you select an imagery workflow. Select the data source format planned for the imagery workflow.

Once the imagery workflow configuration is complete, the calculator completes the workflow sizing analysis and shows the resulting platform solution. You can then include the configured imagery workflow in your project workflows on the CPT Workflow tab for access when completing your design.

Selecting an imagery workflow on the CPT Design tab

Figure A-5.11 shows the CPT Design tab, highlighting the data source selection for an imagery workflow.

  • Select the imagery workflow in the proper user location in column B.
  • Imagery workflows have a unique data source format selection list in System Configuration (column R).
  • The workflow cell in column I will show red when an invalid data source format is selected.

The SDE selection in the CPT Design Software Configuration module identifies the location of each workflow mosaic dataset. Proper selections are either the DBMS platform or a file geodatabase (FGDB).

Once you have configured the imagery workflow, you can view the design solution in Figure A-5.12. In this configuration, the GIS server platform is an E5-1280v2 4 core (2 chip) 3600 MHz server configuration hosting the AGS102 Imagery MosaicDS R 100%Dyn Med 10x7 JPEG workflow accessing a TIFF imagery data source. System loads are 75,000 TPH. The GIS server supports these loads at a server utilization of 54.2 percent. Peak throughput for this workflow deployed on this server configuration is 138,000 TPH.

CPT Video: GIS data source


Configure Display settings

The Display screen shows settings that affect the way the user interface appears to the end user. While you may retain the default values for most of these settings, you can change a few during implementation based on your business needs.

In the Displays section, click Display.

The Display screen appears.

Complete the following fields:

Expected duration comprises [……] % of quota.


Some more stuff

Reduce`FreeVariables[expr] returns a List of the Symbols in expr (more info). Unclear. See this for discussion.

GroupTheory`Tools`MultiSubsets[list, {n, m}], if n + m == Length[list], gives the set of subsets of exactly n elements appended to the set of subsets of exactly m elements in reverse order. (Equivalent to MultiSubsets[list_, {n_, m_}] /; Length[list] == n + m := Join @@@ Transpose[{Subsets[list, {n}, Binomial[n + m, n]], Reverse[Subsets[list, {m}, -Binomial[n + m, n]]]}], and not much faster.) To figure out: what if n + m ≠ Length[list]?

GroupTheory`Tools`PartitionRagged[list, {n1, n2, …}] seems to be equivalent to Internal`PartitionRagged[list, {n1, n2, …}], but works even if n1 + n2 + … ≠ Length[list].

GroupTheory`Tools`IntegerPartitionCounts[n] returns a list of lists corresponding to the number (counts) of integers appearing in each partition. (The correspondence with IntegerPartitions[n] appears to be reversed.)

GroupTheory`Tools`ConsecutiveReplace[expr, ] replaces elements of expr (Head usually List) that match patt1, patt2, … with elements of list1, list2, … in the order they appear in expr. If any of list1, list2, … are exhausted, it wraps around.

Integrate`InverseIntegrate[expr, ] performs definite integration by attempting various substitutions of the form u == g[x], where g[x] is an expression in the integrand. (ref) (application) (application)


Known Issues

The following are the most significant issues known at the time of the release of v3.2.2. The FAQ area contains an up-to-date, complete list of all known problems. The FAQ area can be found by going to the support section of the OpenSpirit web site (www.openspirit.com/support).

ArcMap Extension

  • Sending projected grids as a GIS grid selection event does not work properly at this time.
    Workaround: Re-project the grid to lat/long and resend.

CopyManager

  • Cannot copy from OpenWorks/SeisWorks to OpenWorks/SeisWorks where the source includes projects from the same data store configuration as the target.
    Workaround: Create a mirror data store configuration to use as the target. Please contact [email protected] if you need assistance with this workflow.
  • When copying well or interpretation data into OpenWorks R5000, if the incoming data is assigned to a private interpreter that is not owned by the unix account running the data connector, then the well or interpretation data cannot be copied. Interpreters inserted by the OpenSpirit copy process are, by default, private interpreters.
    Workaround: Set the interpreter to public, or grant the unix account Manage access on the target project.
  • Copy jobs built with v3.1.x or v3.2.0 will not work with v3.2.2.
    Workaround: Change the XML to select the project using the following format instead of referencing projectSets:
    <projectSelection>
      <projectIds>
        <projectId dsin="IE_42" dst="GeoFrame_4" proj="IE_CLOUDSPIN_42"/>
      </projectIds>
    </projectSelection>
  • If path azimuth values for a directional survey are missing in the source database and the survey is copied to another data source (e.g., Petra, OpenWorks, Finder), the copied path azimuth values might be a few degrees off.
    Workaround: Go into the source database (e.g., Finder, Petra, OpenWorks) where the azimuth north type values are missing and set them accordingly before copying.

3D Viewer

  • Images are either not displayed or are displayed in a squashed manner depending on the graphics card version, graphics driver, and Microsoft patch version.

Data Connectors

  • Kingdom - Cannot create slice volumes, and cannot create or display brick volumes in Kingdom projects.
  • Kingdom - Problem saving non-seismic grids into TKS8.5. Grids are lost after the data connector is shut down. The issue exists only for TKS8.5 64-bit.
  • Kingdom - The data connector for a Kingdom 8.5 (64-bit) Oracle project will not start if OpenSpirit is installed under "Program Files (x86)".
    Workaround: 1) install OpenSpirit in a directory without "()"; 2) install the Oracle 11 Win64 client.
  • OpenWorks 2003.12 - The OpenWorks 2003.12 data connector will avoid a possible corruption problem associated with accessing or creating data in multiple SeisWorks 2d projects and will display an appropriate error message like the following: "Due to SeisWorks devkit limitations, it is not possible to create 2d seismic files in a SeisWorks 2d project if another SeisWorks 2d project has already been accessed in the same process. Data has already been read from or written to SeisWorks 2d project (swproj1) and a write request to a different SeisWorks 2d project (swproj2) was requested. This request cannot be honored because the OpenWorks 2003.12 data connector will become corrupted and all SeisWorks data access will fail."
    Possible workarounds: 1) shut down the OpenWorks 2003.12 data connector after accessing the first SeisWorks 2d project; 2) use different OpenWorks 2003.12 data store installations for each SeisWorks 2d project (assuming the SeisWorks 2d projects are associated with two different OpenWorks projects).
  • OpenWorks 2003.12 - An out-of-memory error occurs when importing into Petrel multiple horizons from multiple SeisWorks projects that are associated with thousands of 2d lines, which are also in multiple SeisWorks projects.
  • OpenWorks 2003.12 - During a copy workflow, the TVDSS value in the OpenWorks 2003.12 data connector does not appear to get populated. Stopping and restarting the data server will allow the attribute to post. Fixed in OpenWorks R5000.
  • GeoFrame - When a seismic survey is referenced to a Local Coordinate System that is different from the Display Coordinate System of the project, the survey Coordinate System is not honored.
  • Some polylines are missing when both 2d and 3d faults are copied into TKS84. The issue is fixed with TKS85.
    Workaround: Copy 2d and 3d faults in separate copy jobs.

Windows Vista

  • OpenSpirit Master installations are not supported at this time; only Satellite installations are.
  • Vista security "locks down" C:\Program Files, which prevents some Desktop Admin tools from applying changes.
    Workaround: Install into a directory other than C:\Program Files.

General

When installing on 64-bit Windows platforms, do not install in the C:\Program Files directory. The default (C:\Program Files (x86)) should be selected, or any other custom location that is not C:\Program Files.

Grid transfers should be limited to 5 million cells (~2300x2300) or less. If a grid of more than 5 million cells is required, the grid should be resampled to a coarser cell size in order to reduce the number of grid cells to less than 5 million.

Broadcasting SLICE and BRICK volumes from the 3D Volume tab in the DataSelector to the Excel Adapter when 3D Volumes are toggled to “Listen” causes only one record to be received by Excel.
Workaround: Broadcast the related 3D Survey(s) from the 3D Survey tab and toggle the Excel Adapter to “From 3D Seismic Survey select.”

When 3d volumes are selected for live trace coverage scanning and the job is saved to an XML file (to be used for command-line processing), the volumes are not saved in the XML file.
Workaround: The user must go through the user interface in order to do 3D Volume live trace coverage scanning.

