Preflight request is not logged to console, double preflight received only starts upload connection for the last one #2965

Closed
olivermt opened this issue Dec 25, 2023 · 8 comments · Fixed by #3004

Comments

olivermt commented Dec 25, 2023

Environment

  • Elixir version (elixir -v): 1.15.6
  • Phoenix version (mix deps): 1.7.10
  • Phoenix LiveView version (mix deps): 0.20.2
  • Operating system: Mac
  • Browsers you attempted to reproduce this bug on (the more the merrier): Microsoft Edge, Safari
  • Does the problem persist after removing "assets/node_modules" and trying again? Yes, on multiple machines

Actual behavior

I am making a "continous uploader" basically using a client side queue and some "send next chunk" functionality. It seems that under load there is some unexpected behaviour. This is especially seen with multiple very small files. I am uploading textures and one big OBJ file.

The gist of it is that sometimes there is no "sending preflight request" console output at all. When inspecting the console I see the log for the file before, and then the preflight response comes in with two entries. The IEx console then only shows the latter of those two entries connecting and starting its upload.


The websocket messages show this:

["4","35","lv:phx-F6Qn6hRH-dcNhAEE","allow_upload",{"ref":"phx-F6Qn6mUceI6STQEk","entries":[{"last_modified":1700581664000,"name":"2_Part001a_b_Textured_No_visibility_001.jpg","relative_path":"","size":459548,"type":"image/jpeg","ref":"5"}]}]

The response is as expected: one entry for the preflight request.

["4","35","lv:phx-F6Qn6hRH-dcNhAEE","phx_reply",{"status":"ok","response":{"config":{"chunk_size":5242880,"max_file_size":10000000000,"max_entries":1500},"errors":{},"diff":{"2":{"0":{"12":{"1":{"0":{"0":" class=\"space-y-2\" phx-change=\"validate\" phx-submit=\"save\"","1":"","2":"","3":{"0":{"0":{"2":"","4":" data-phx-active-refs=\"0,1,2,3,4,5\"","5":" data-phx-done-refs=\"0,2,4\"","6":" data-phx-preflighted-refs=\"0,1,2,3,4,5\"","8":" class=\"hidden\" multiple"},"1":" disabled","3":{"d":[["1_Part001a_a_Textured_000.jpg"," value=\"100\"","100"," phx-value-ref=\"0\""," phx-value-ref=\"0\"",""],["1_Part001a_a_Textured_001.jpg"," value=\"0\"","0"," phx-value-ref=\"1\""," phx-value-ref=\"1\"",""],["1_Part001a_a_Textured_002.jpg"," value=\"100\"","100"," phx-value-ref=\"2\""," phx-value-ref=\"2\"",""],["1_Part001a_a_Textured_003.jpg"," value=\"0\"","0"," phx-value-ref=\"3\""," phx-value-ref=\"3\"",""],["2_Part001a_b_Textured_No_visibility_000.jpg"," value=\"100\"","100"," phx-value-ref=\"4\""," phx-value-ref=\"4\"",""],["2_Part001a_b_Textured_No_visibility_001.jpg"," value=\"0\"","0"," phx-value-ref=\"5\""," phx-value-ref=\"5\"",""]]},"4":"","7":" disabled"},"1":""}}}}}}},"ref":"phx-F6Qn6mUceI6STQEk","entries":{"5":"SFMyNTY.g2gDaAJhBXQAAAADdwNwaWRYdw1ub25vZGVAbm9ob3N0AAAF-AAAAAAAAAAAdwNyZWZoAm0AAAAUcGh4LUY2UW42bVVjZUk2U1RRRWttAAAAATV3A2NpZHcDbmlsbgYA8gBUoowBYgABUYA.PGFM44bGg5zUnmLQ-VRtWBItg_Qnv0b2xRa_Zd7kfr8"}}}]

The next allow_upload call from the client is then for the file after the stalled one:

["4","44","lv:phx-F6Qn6hRH-dcNhAEE","allow_upload",{"ref":"phx-F6Qn6mUceI6STQEk","entries":[{"last_modified":1700581662000,"name":"3_Part001a_c_Textured_No_visibility_000.jpg","relative_path":"","size":615295,"type":"image/jpeg","ref":"7"}]}]

The reply contains two entries:

["4","44","lv:phx-F6Qn6hRH-dcNhAEE","phx_reply",{"status":"ok","response":{"config":{"chunk_size":5242880,"max_file_size":10000000000,"max_entries":1500},"errors":{},"diff":{"2":{"0":{"12":{"1":{"0":{"0":" class=\"space-y-2\" phx-change=\"validate\" phx-submit=\"save\"","1":"","2":"","3":{"0":{"0":{"2":"","4":" data-phx-active-refs=\"0,1,2,3,4,5,6,7\"","5":" data-phx-done-refs=\"0,1,2,3,4\"","6":" data-phx-preflighted-refs=\"0,1,2,3,4,5,6,7\"","8":" class=\"hidden\" multiple"},"1":" disabled","3":{"d":[["1_Part001a_a_Textured_000.jpg"," value=\"100\"","100"," phx-value-ref=\"0\""," phx-value-ref=\"0\"",""],["1_Part001a_a_Textured_001.jpg"," value=\"100\"","100"," phx-value-ref=\"1\""," phx-value-ref=\"1\"",""],["1_Part001a_a_Textured_002.jpg"," value=\"100\"","100"," phx-value-ref=\"2\""," phx-value-ref=\"2\"",""],["1_Part001a_a_Textured_003.jpg"," value=\"100\"","100"," phx-value-ref=\"3\""," phx-value-ref=\"3\"",""],["2_Part001a_b_Textured_No_visibility_000.jpg"," value=\"100\"","100"," phx-value-ref=\"4\""," phx-value-ref=\"4\"",""],["2_Part001a_b_Textured_No_visibility_001.jpg"," value=\"0\"","0"," phx-value-ref=\"5\""," phx-value-ref=\"5\"",""],["2_Part001a_b_Textured_No_visibility_002.jpg"," value=\"0\"","0"," phx-value-ref=\"6\""," phx-value-ref=\"6\"",""],["3_Part001a_c_Textured_No_visibility_000.jpg"," value=\"0\"","0"," phx-value-ref=\"7\""," phx-value-ref=\"7\"",""]]},"4":"","7":" disabled"},"1":""}}}}}}},"ref":"phx-F6Qn6mUceI6STQEk","entries":{"6":"SFMyNTY.g2gDaAJhBXQAAAADdwNwaWRYdw1ub25vZGVAbm9ob3N0AAAF-AAAAAAAAAAAdwNyZWZoAm0AAAAUcGh4LUY2UW42bVVjZUk2U1RRRWttAAAAATZ3A2NpZHcDbmlsbgYAjwpUoowBYgABUYA.IkSJ0o50hKBYHiY_6Pp80pVFE0bbnlEWK2q26Pnhsm8","7":"SFMyNTY.g2gDaAJhBXQAAAADdwNwaWRYdw1ub25vZGVAbm9ob3N0AAAF-AAAAAAAAAAAdwNyZWZoAm0AAAAUcGh4LUY2UW42bVVjZUk2U1RRRWttAAAAATd3A2NpZHcDbmlsbgYAkApUoowBYgABUYA.L7bDpzLQnFVjH8sL0DbPlfpUeCV788pTosaKwjwPbys"}}}]

Interestingly, you can see the stalled file in the diff with the correct ref. However, the entry for ref 6 (the stalled file) is simply ignored.

I hope this is detailed enough for you to understand where in the JavaScript code(?) the bug is, and how the LiveView ends up with the correct list of entries/refs, and even a responded entry, when there is not a single trace of the stalled file being sent up as an event.

My client-side code is extremely terse and just listens to an event that is pushed based on handle_progress on the server side.
Pertinent entries here:
Hook:


export const QueuedUploaderHook = {
  async mounted() {

    const maxConcurrency = this.el.dataset.maxConcurrency || 5;
    let filesRemaining: Array<File> = [];
    this.el.addEventListener("input", async (event: Event) => {
      event.preventDefault()

      if (event.target instanceof HTMLInputElement) {
        const files_html = event.target.files;
        if (files_html) {

          const files = Array.from(files_html);
          filesRemaining = files;
          const firstFiles = files.slice(0, maxConcurrency);
          this.upload("files", firstFiles);

          filesRemaining.splice(0, maxConcurrency);
        }
      }
    });

    this.handleEvent("upload_send_next_file", () => {
      // console.log("Uploading more files! Remainder:", filesRemaining);

      if (filesRemaining.length > 0) {
        const nextFile = filesRemaining.shift();
        this.upload("files", [nextFile]);
      } else {
        console.log("Done uploading, noop!");
      }
    });
  }
}

Pertinent parts of the LiveView:

|> allow_upload(:files,
  accept: :any,
  max_entries: 1500,
  # minimum 5 MB for multipart
  chunk_size: 5 * 1_024 * 1_024,
  max_file_size: 10_000_000_000,
  auto_upload: true,
  writer: &r2_writer/3,
  progress: &handle_progress/3
)

defp r2_writer(_name, %Phoenix.LiveView.UploadEntry{} = entry, socket) do
  s3_opts = [content_type: entry.client_type]

  {
    S3UploadWriter,
    provider: :r2,
    path_prefix: socket.assigns.path_prefix,
    name: entry.client_name,
    s3_opts: s3_opts
  }
end

def handle_progress(:files, entry, socket) do
  if entry.done? && Enum.count(socket.assigns.uploads.files.entries, fn e -> !e.done? end) < 3 do
    {:noreply, push_event(socket, "upload_send_next_file", %{})}
  else
    {:noreply, socket}
  end
end

I am omitting the upload writer; it should be irrelevant. I wrote filenames to a log file in the init of the writer, and the stalled file never made it there. That makes sense, because the upload channel never connected and never produced a message like [info] JOINED lvu:36 in 197ms Parameters: %{"token" => "SFMyNTY.g2gDaAJhBXQAAAADdwNwaWRYdw1ub25vZGVAbm9ob3N0AAAF-AAAAAAAAAAAdwNyZWZoAm0AAAAUcGh4LUY2UW42bVVjZUk2U1RRRWttAAAAAjM2dwNjaWR3A25pbG4GAJt1VKKMAWIAAVGA.XG--dovnR7h0RqqOY_3i4divuMF1_fJFFPh9gqJUb6k"} (that lvu channel is the process in which the upload writer is initialized and lives).
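
For context, a minimal sketch of that instrumentation, not the actual writer (the module body, log path, and opts shape are placeholders; the real S3UploadWriter stays omitted):

defmodule S3UploadWriter do
  @behaviour Phoenix.LiveView.UploadWriter

  @impl true
  def init(opts) do
    # Every entry whose upload channel actually joins hits init/1;
    # the stalled file never showed up in this log.
    File.write!("/tmp/upload_writer.log", "init: #{opts[:name]}\n", [:append])

    {:ok, %{opts: opts}}
  end

  # meta/1, write_chunk/2 and close/2 omitted here, as in the report.
end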

Expected behavior

Uhm, good question.
Either that the preflight is sent like it should be, or that the client can handle getting a double preflight response entry back.

Edit: Removed the first part of the issue that was based on screenshots, because apparently the screenshots didn't work, and pulling this out of the websocket log gave good insight anyway.

@olivermt (Author)

I have tried reproducing this with a standalone project, but it was a lot more to wire up than I thought, so I'll let you at least read the issue and maybe ask for more feedback before I spend the hours to do that 😅


olivermt commented Dec 28, 2023

Something seems to be wonky in the internal state here.
I implemented a "retryer".

I modified my hook to keep a copy of the original file list so that it can retry uploading from it if a file is stalled.

export const QueuedUploaderHook = {
  async mounted() {

    const maxConcurrency = this.el.dataset.maxConcurrency || 5;
    let filesRemaining: Array<File> = [];
    let allFiles: Array<File> = [];

    this.el.addEventListener("input", async (event: Event) => {
      event.preventDefault()

      if (event.target instanceof HTMLInputElement) {
        const files_html = event.target.files;
        if (files_html) {

          const files = Array.from(files_html);
          filesRemaining = files;
          allFiles = [...files];
          const firstFiles = files.slice(0, maxConcurrency);
          this.upload("files", firstFiles);

          filesRemaining.splice(0, maxConcurrency);
        }
      }
    });

    this.handleEvent("retry_file", ({ name }) => {
      console.log("Trying to reupload file: ", name);
      const file = allFiles.find((file) => file.name == name)

      console.log("allfiles", allFiles);
      if (file != undefined) {
        this.upload("files", [file]);
      } else {
        console.log("file not found in original files?!");
      }
    });

    this.handleEvent("upload_send_next_file", () => {
      // console.log("Uploading more files! Remainder:", filesRemaining);

      if (filesRemaining.length > 0) {
        const nextFile = filesRemaining.shift();
        console.log("Uploading file: ", nextFile.name);
        this.upload("files", [nextFile]);
      } else {
        console.log("Done uploading, noop!");
      }
    });
  }
}

Retry handler:

  def handle_event("retry-upload", %{"ref" => ref}, socket) do
    Enum.find(socket.assigns.uploads.files.entries, fn f -> f.ref == ref end)
    |> IO.inspect(label: "file")
    |> case do
      nil ->
        {:noreply, socket}

      file ->
        socket =
          cancel_upload(socket, :files, ref)
          |> push_event("retry_file", %{name: file.client_name})

        {:noreply, socket}
    end
  end

Doing that only triggers validate, and nothing else. As you can see in the logs below, there is a long list of files in the `:files` upload, but not the retried file, so cancel_upload removes it from the state as it should; the client still does not re-upload it, though.

["4","173","lv:phx-F6T7oC6FHFw9ICli","event",{"type":"click","event":"retry-upload","value":{"ref":"6","value":""}}]


["4","173","lv:phx-F6T7oC6FHFw9ICli","phx_reply",{"status":"ok","response":{"diff":{"2":{"0":{"12":{"1":{"0":{"0":" class=\"space-y-2\" phx-change=\"validate\" phx-submit=\"save\"","1":"","2":"","3":{"0":{"0":{"2":"","4":" data-phx-active-refs=\"0,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24\"","5":" data-phx-done-refs=\"0,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24\"","6":" data-phx-preflighted-refs=\"0,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24\"","8":" class=\"hidden\" multiple"},"1":"","3":{"d":[[... long list of files and refs... not containing the retried file...."]]},"4":"","7":""},"1":""}}}}}},"e":[["retry_file",{"name":"2_Part001a_b_Textured_No_visibility_002.jpg"}]]}}}]


["4","174","lv:phx-F6T7oC6FHFw9ICli","event",{"type":"form","event":"validate","value":"_target=files","uploads":{}}]
["4","174","lv:phx-F6T7oC6FHFw9ICli","phx_reply",{"status":"ok","response":{}}]

The only somewhat user-friendly "workaround" for now is to let users delete files, then track those deleted files in a text list close to the file picker, so they can see which files they need to manually re-choose. Alternatively, I need to implement de-duping so they can just choose the whole folder again and we simply skip any items that are already .done? (sketched below).
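
A minimal sketch of what that de-duping could look like on the server, assuming the hook pushes the selected file names as an "upload_scrub_list" event and only uploads the names it gets back (the event name and payload shape are just my convention, not LiveView API):

def handle_event("upload_scrub_list", %{"file_names" => file_names}, socket) do
  # Names of entries that already finished uploading in this LiveView.
  already_done =
    socket.assigns.uploads.files.entries
    |> Enum.filter(& &1.done?)
    |> MapSet.new(& &1.client_name)

  # Only hand back the names the client still needs to upload.
  deduped = Enum.reject(file_names, &MapSet.member?(already_done, &1))

  {:reply, %{deduped_filenames: deduped}, socket}
end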

@olivermt (Author)

Hello again!

I had time to make a minimal repro (it is insane how nice it is to do this quickly with a single-file Elixir script):

Application.put_env(:sample, Example.Endpoint,
  http: [ip: {127, 0, 0, 1}, port: 5001],
  server: true,
  live_view: [signing_salt: "aaaaaaaa"],
  secret_key_base: String.duplicate("a", 64),
  pubsub_server: Example.PubSub
)

Mix.install([
  {:plug_cowboy, "~> 2.5"},
  {:jason, "~> 1.2"},
  {:phoenix, "~> 1.7.7"},
  {:phoenix_live_view, "~> 0.20.2"}
])

defmodule Example.ErrorView do
  def render(template, _), do: Phoenix.Controller.status_message_from_template(template)
end

defmodule Example.NoOpWriter do
  @behaviour Phoenix.LiveView.UploadWriter

  @impl true
  def init(opts) do
    {:ok, %{parts: [], part_number: 1}}
  end

  @impl true
  def meta(state), do: state

  @impl true
  def write_chunk(data, state) do
    %{part_number: part_number} = state
    part = "foo"
    {:ok, %{state | parts: [part | state.parts], part_number: part_number + 1}}
  end

  @impl true
  def close(_state, :cancel) do
    {:ok, :aborted}
  end

  def close(_state, :done) do
    {:ok, %{}}
  end
end

defmodule Example.CoreComponents do
  use Phoenix.Component
  attr(:for, :any, required: true, doc: "the datastructure for the form")
  attr(:as, :any, default: nil, doc: "the server side parameter to collect all input under")

  attr(:rest, :global,
    include: ~w(autocomplete name rel action enctype method novalidate target multipart),
    doc: "the arbitrary HTML attributes to apply to the form tag"
  )

  slot(:inner_block, required: true)
  slot(:actions, doc: "the slot for form actions, such as a submit button")

  def simple_form(assigns) do
    ~H"""
    <.form :let={f} for={@for} as={@as} {@rest}>
      <div>
        <%= render_slot(@inner_block, f) %>
        <div :for={action <- @actions}>
          <%= render_slot(action, f) %>
        </div>
      </div>
    </.form>
    """
  end
end

defmodule Example.UploadLive do
  use Phoenix.LiveView, layout: {__MODULE__, :live}

  import Example.CoreComponents

  def mount(_params, _session, socket) do
    socket =
      socket
      |> allow_upload(:files,
        accept: :any,
        max_entries: 1500,
        # minimum 5 mb for multipart
        chunk_size: 5 * 1_024 * 1_024,
        max_file_size: 10_000_000_000,
        auto_upload: true,
        writer: &r2_writer/3,
        progress: &handle_progress/3
      )
      |> assign(:form, to_form(%{}))

    {:ok, socket}
  end

  defp phx_vsn, do: Application.spec(:phoenix, :vsn)
  defp lv_vsn, do: Application.spec(:phoenix_live_view, :vsn)

  def render("live.html", assigns) do
    ~H"""
    <script src={"https://cdn.jsdelivr.net/npm/phoenix@#{phx_vsn()}/priv/static/phoenix.min.js"}></script>
    <script src={"https://cdn.jsdelivr.net/npm/phoenix_live_view@#{lv_vsn()}/priv/static/phoenix_live_view.min.js"}></script>
    <script>
      const QueuedUploaderHook = {
        async mounted() {
          const maxConcurrency = this.el.dataset.maxConcurrency || 3;
          let filesRemaining = [];

          this.el.addEventListener("input", async (event) => {
            event.preventDefault()

            if (event.target instanceof HTMLInputElement) {
              const files_html = event.target.files;
              if (files_html) {

                const rawFiles = Array.from(files_html);
                console.log("raw files", rawFiles);
                const fileNames = rawFiles.map((f) => {
                  return f.name;
                });

                this.pushEvent("upload_scrub_list", { file_names: fileNames }, ({ deduped_filenames }, ref) => {
                  console.log("deduped filenames", deduped_filenames);
                  const files = rawFiles.filter((f) => {
                    return deduped_filenames.includes(f.name);
                  });
                  console.log("scrubbed files", files);
                  filesRemaining = files;
                  const firstFiles = files.slice(0, maxConcurrency);
                  this.upload("files", firstFiles);

                  filesRemaining.splice(0, maxConcurrency);
                });

              }
            }
          });

          this.handleEvent("upload_send_next_file", () => {
            // console.log("Uploading more files! Remainder:", filesRemaining);

            if (filesRemaining.length > 0) {
              const nextFile = filesRemaining.shift();
              if (nextFile != undefined) {
                console.log("Uploading file: ", nextFile.name);
                this.upload("files", [nextFile]);
              }
            } else {
              console.log("Done uploading, noop!");
            }
          });
        }
      };
      let liveSocket = new window.LiveView.LiveSocket("/live", window.Phoenix.Socket, {hooks: {QueuedUploaderHook}});
      liveSocket.connect();
    </script>

    <%= @inner_content %>
    """
  end

  def render(assigns) do
    ~H"""
    <main>
      <h1>Uploader reproduction</h1>
      <.simple_form for={@form} phx-submit="save" phx-change="validate">
        <%!-- use phx-drop-target with the upload ref to enable file drag and drop --%>
        <%!-- phx-drop-target={@uploads.files.ref} --%>
        <section>
          <.live_file_input upload={@uploads.files} style="display: none;" />
          <input
            id="fileinput"
            type="file"
            multiple
            phx-hook="QueuedUploaderHook"
            disabled={file_picker_disabled?(@uploads)}
          />
          <h2 :if={length(@uploads.files.entries) > 0}>Currently uploading files</h2>
          <div>
            <table>
              <!-- head -->
              <thead>
                <tr>
                  <th>File Name</th>
                  <th>Progress</th>
                  <th>Cancel</th>
                  <th>Errors</th>
                </tr>
              </thead>
              <tbody>
                <%= for entry <- uploads_in_progress(@uploads) do %>
                  <tr>
                    <td><%= entry.client_name %></td>
                    <td>
                      <progress value={entry.progress} max="100">
                        <%= entry.progress %>%
                      </progress>
                    </td>

                    <td>
                      <%!-- <button
                        type="button"
                        phx-click="retry-upload"
                        phx-value-ref={entry.ref}
                        aria-label="cancel"
                      >
                        <i class="fa-solid fa-arrow-rotate-right"></i>
                      </button> --%>
                      <button
                        type="button"
                        phx-click="cancel-upload"
                        phx-value-ref={entry.ref}
                        aria-label="cancel"
                      >
                        <span>&times;</span>
                      </button>
                    </td>
                    <td>
                      <%= for err <- upload_errors(@uploads.files, entry) do %>
                        <p style="color: red;"><%= error_to_string(err) %></p>
                      <% end %>
                    </td>
                  </tr>
                <% end %>
              </tbody>
            </table>
          </div>
          <%!-- Phoenix.Component.upload_errors/1 returns a list of error atoms --%>
          <%= for err <- upload_errors(@uploads.files) do %>
            <p style="text-red"><%= error_to_string(err) %></p>
          <% end %>
        </section>
      </.simple_form>
    </main>
    """
  end

  def handle_progress(
        :files,
        entry,
        %{
          assigns: %{
            uploads: %{files: %{entries: entries}}
          }
        } =
          socket
      ) do
    Enum.count(entries, fn e -> !e.done? end)
    |> IO.inspect(label: "not done count")

    add_count =
      max(3 - Enum.count(entries, fn e -> !e.done? end), 0) |> IO.inspect(label: "add counter")

    add_counter_range = Range.new(0, add_count)

    socket =
      Enum.reduce(add_counter_range, socket, fn _, acc_socket ->
        push_event(acc_socket, "upload_send_next_file", %{})
      end)

    {:noreply, socket}
  end

  # This dedupes against s3, just doing a no-op here to preserve the original uploader js code
  def handle_event(
        "upload_scrub_list",
        %{"file_names" => file_names},
        socket
      ) do
    {:reply, %{deduped_filenames: file_names}, socket}
  end

  def handle_event("validate", _params, socket) do
    {:noreply, socket}
  end

  def handle_event("cancel-upload", %{"ref" => ref}, socket) do
    file = Enum.find(socket.assigns.uploads.files.entries, fn f -> f.ref == ref end)

    {:noreply, cancel_upload(socket, :files, ref)}
  end

  def handle_event("save", _params, socket) do
    # consume_uploaded_entries(socket, :files, fn _info, _entry ->
    #   {:ok, %{}}
    # end)

    {:noreply, socket}
  end

  def error_to_string(:too_large), do: "Too large"
  def error_to_string(:not_accepted), do: "You have selected an unacceptable file type"
  def error_to_string(:s3_error), do: "Error on writing to cloudflare"

  def error_to_string(unknown) do
    IO.inspect(unknown, label: "Unknown error")
    "unknown error"
  end

  ## Helpers

  defp submit_disabled?(uploads, staged_files) do
    cond do
      Enum.any?(uploads.files.entries, fn entry -> entry.done? == false end) ->
        true

      length(uploads.files.entries) + length(staged_files) < 1 ->
        true

      true ->
        false
    end
  end

  defp file_picker_disabled?(uploads) do
    Enum.any?(uploads.files.entries, fn e -> !e.done? end)
  end

  defp r2_writer(_name, %Phoenix.LiveView.UploadEntry{} = entry, socket) do
    {
      Example.NoOpWriter,
      provider: :r2, name: entry.client_name
    }
  end

  defp progress(total, todo) do
    completed = total - todo
    (completed / total) |> Kernel.*(100) |> trunc()
  end

  defp uploads_in_progress(uploads) do
    uploads.files.entries
  end
end

defmodule Example.Router do
  use Phoenix.Router
  import Phoenix.LiveView.Router

  pipeline :browser do
    plug(:accepts, ["html"])
  end

  scope "/", Example do
    pipe_through(:browser)

    live("/", UploadLive, :index)
  end
end

defmodule Example.Endpoint do
  use Phoenix.Endpoint, otp_app: :sample
  socket("/live", Phoenix.LiveView.Socket)
  plug(Example.Router)
end

{:ok, _} =
  Supervisor.start_link(
    [Example.Endpoint, {Phoenix.PubSub, [name: Example.PubSub, adapter: Phoenix.PubSub.PG2]}],
    strategy: :one_for_one
  )

Process.sleep(:infinity)

To use it, simply download the attached zip of random files (it is 10 files of 150 kB with randomly generated content):
random files.zip

Select all of them for upload and you should get something similar to the behaviour described above (screenshot omitted), just with the stall occurring at different files.


olivermt commented Jan 10, 2024

From my observations, there seems to be some kind of race condition here, because with the no-op writer that does not go to R2 (S3), it happens way more frequently. The largest number I saw was four entries in one preflight response.

Edit: Also ignore the fact that I am not using/setting the max-concurrency option and just hardcode 3 on both ends. I tried as best I could to pull this out of a real app to reproduce it properly :P

@SteffenDE (Collaborator)

Please try {:phoenix_live_view, github: "SteffenDE/phoenix_live_view", branch: "issue_2965_assets", override: true} and see if that solves your problem.

@olivermt (Author)

Hello, same issue: multiple entries come back and the LiveView JS code does not seem to handle this (screenshot omitted).

It is interesting that it seems to be fairly deterministic: it always fails on file nr. 4. I initially feed 3 files to the uploader and then feed one more at a time.

I will have more time later this week to try to do some pointed debug-logging and see if I can figure out why it doesn't properly handle multiple preflight entry responses.

@SteffenDE (Collaborator)

If that’s in your single-file script, make sure to also update the <script> src entry:
<script src="https://cdn.jsdelivr.net/gh/SteffenDE/phoenix_live_view@issue_2965_assets/priv/static/phoenix_live_view.js"></script>
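
For reference, in the single-file repro above that would mean something like this (a sketch; only the phoenix_live_view entries change, everything else stays as posted):

# Point Mix.install at the branch under test instead of the Hex release:
Mix.install([
  {:plug_cowboy, "~> 2.5"},
  {:jason, "~> 1.2"},
  {:phoenix, "~> 1.7.7"},
  {:phoenix_live_view,
   github: "SteffenDE/phoenix_live_view", branch: "issue_2965_assets", override: true}
])

# ...and in render("live.html", assigns), load the matching JS bundle:
# <script src="https://cdn.jsdelivr.net/gh/SteffenDE/phoenix_live_view@issue_2965_assets/priv/static/phoenix_live_view.js"></script>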

chrismccord added a commit that referenced this issue Jan 16, 2024
* track in-progress preflights (fixes #2965)

* Update assets/js/phoenix_live_view/upload_entry.js

---------

Co-authored-by: Chris McCord <[email protected]>
@olivermt (Author)

@SteffenDE indeed, giga brainfart there, it works like a charm!!! thanks a dozen :)

chrismccord added a commit that referenced this issue Jan 17, 2024
SteffenDE added commits to SteffenDE/phoenix_live_view that referenced this issue Feb 24, 2024
chrismccord pushed a commit that referenced this issue Feb 28, 2024
* Properly track preflighted uploads on the server

Fixes #3115.
Relates to #3004.
Partially fixes #2835.

* add test for #2965, #3115